Obliviousness Makes Poisoning Adversaries Weaker

03/26/2020
by Sanjam Garg, et al.

Poisoning attacks have emerged as a significant security threat to machine learning (ML) algorithms. It has been demonstrated that adversaries who make small changes to the training set, such as adding specially crafted data points, can hurt the performance of the output model. Most of these attacks require full knowledge of the training data or the underlying data distribution. In this paper we study the power of oblivious adversaries who have no information about the training set. We show a separation between oblivious and full-information poisoning adversaries. Specifically, we construct a sparse linear regression problem for which the LASSO estimator is robust against oblivious adversaries whose goal is to add non-relevant features to the model within a certain poisoning budget. On the other hand, non-oblivious adversaries with the same budget can craft poisoning examples based on the rest of the training data and successfully add non-relevant features to the model.
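To make the setting concrete, the sketch below illustrates the kind of experiment the abstract describes: a sparse linear regression instance is fit with LASSO (here a plain coordinate-descent solver), and an oblivious poisoner, who crafts its points without seeing the training data, tries to push an irrelevant feature into the model. This is a minimal toy illustration, not the paper's actual construction; the dimensions, regularization strength `lam=0.1`, poisoning budget, and the specific poison points are all assumptions made for the example.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator used by LASSO coordinate descent."""
    return np.sign(x) * max(abs(x) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=300):
    """Coordinate descent for (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]      # partial residual excluding feature j
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            w[j] = soft(rho, lam) / z
    return w

rng = np.random.default_rng(0)
n, d = 200, 10
w_true = np.zeros(d)
w_true[:3] = [3.0, -2.0, 1.5]                   # only features 0, 1, 2 are relevant
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

w_clean = lasso_cd(X, y, lam=0.1)
support_clean = set(np.flatnonzero(np.abs(w_clean) > 0.1))

# Oblivious poisoner: crafts k points with NO access to (X, y),
# trying to make LASSO include the irrelevant feature 9.
k = 10
X_poison = np.zeros((k, d))
X_poison[:, 9] = 1.0
y_poison = np.ones(k)                            # pretend feature 9 predicts the label

w_pois = lasso_cd(np.vstack([X, X_poison]),
                  np.concatenate([y, y_poison]), lam=0.1)
support_pois = set(np.flatnonzero(np.abs(w_pois) > 0.1))
```

With this small budget the poisoned correlation with feature 9 stays below the soft-threshold `lam`, so LASSO keeps the same support (features 0, 1, 2) and leaves feature 9 out, mirroring the robustness-to-oblivious-adversaries phenomenon the abstract describes. A full-information adversary can instead tailor its points to the rest of the training data, which is the separation the paper studies.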


