Multi-party Poisoning through Generalized p-Tampering

09/10/2018
by Saeed Mahloujifar et al.

In a poisoning attack against a learning algorithm, an adversary tampers with a fraction of the training data T with the goal of increasing the classification error of the constructed hypothesis/model over the final test distribution. In the distributed setting, T might be gathered gradually from m data providers P_1,...,P_m who generate and submit their shares of T in an online way. In this work, we initiate a formal study of (k,p)-poisoning attacks in which an adversary controls k∈[m] of the parties, and for each corrupted party P_i, the adversary submits some poisoned data T'_i on behalf of P_i that is still "(1-p)-close" to the correct data T_i (e.g., a 1-p fraction of T'_i is still honestly generated). For k=m, this model recovers the traditional notion of poisoning, and for p=1 it coincides with the standard notion of corruption in multi-party computation. We prove that if the generated hypothesis h has an initial constant error, there is always a (k,p)-poisoning attacker who can either decrease the confidence of h (i.e., the probability that h has small error) or increase the error of h by Ω(p · k/m). Our attacks can be implemented in polynomial time given samples from the correct data, and they use no wrong labels if the original distributions are not noisy. At a technical level, we prove a general lemma about biasing bounded functions f(x_1,...,x_n) ∈ [0,1] through an attack model in which each block x_i might be controlled by an adversary with marginal probability p in an online way. When these tampering probabilities are independent, this coincides with the model of p-tampering attacks; thus, we call our model generalized p-tampering. We prove the power of such attacks by incorporating ideas from the context of coin-flipping attacks into the p-tampering model, generalizing the results of both of these areas.
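To make the Ω(p · k/m) bias concrete, below is a minimal toy simulation (our illustration, not the paper's construction). It assumes the simplest possible bounded function, f = the empirical mean of m Bernoulli(1/2) party contributions; the adversary corrupts k of the m parties and overwrites each corrupted contribution with independent marginal probability p. This is plain per-block p-tampering rather than the paper's correlated generalized variant, and for a general f the attack would instead submit the value maximizing an estimated conditional expectation of f given the prefix.

```python
# Toy Monte Carlo sketch of a (k, p)-poisoning-style biasing attack.
# Hypothetical setup (not from the paper): f(x_1, ..., x_m) is the
# empirical mean of m i.i.d. Bernoulli(1/2) party contributions.
# The adversary corrupts k of the m parties, and each corrupted
# contribution is tampered with independent marginal probability p.
# For this trivially biasable f, pushing tampered blocks toward 1
# lifts E[f] by exactly p*k/(2m), matching the Omega(p * k/m)
# scaling claimed in the abstract.

import random


def run_trial(m: int, k: int, p: float) -> float:
    """One draw of f under the attack; returns a value in [0, 1]."""
    total = 0.0
    for i in range(m):
        if i < k and random.random() < p:
            # Corrupted party with a tampering opportunity: submit the
            # value that maximizes f (for the empirical mean, just 1).
            total += 1.0
        else:
            # Honest (or untampered) contribution: Bernoulli(1/2).
            total += 1.0 if random.random() < 0.5 else 0.0
    return total / m


def estimate_bias(m: int, k: int, p: float, trials: int = 50_000) -> float:
    """Monte Carlo estimate of E[f under attack] - E[f honest]."""
    attacked = sum(run_trial(m, k, p) for _ in range(trials)) / trials
    return attacked - 0.5  # the honest expectation of f is 1/2


if __name__ == "__main__":
    m = 20
    for k, p in [(20, 1.0), (10, 1.0), (20, 0.5), (5, 0.4)]:
        print(f"k={k:2d} p={p:.1f}  bias ~ {estimate_bias(m, k, p):+.4f}"
              f"  (predicted {p * k / (2 * m):+.4f})")
```

The k=m, p=1 configuration corresponds to traditional full poisoning (f is forced to 1, bias 1/2), while shrinking either k or p degrades the achievable bias linearly in p·k/m.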


Related research

11/10/2017 · Learning under p-Tampering Attacks
Mahloujifar and Mahmoody (TCC'17) studied attacks against learning algor...

01/26/2021 · Property Inference From Poisoning
Property inference attacks consider an adversary who has access to the t...

10/02/2018 · Can Adversarially Robust Learning Leverage Computational Hardness?
Making learners robust to adversarial perturbation at test time (i.e., e...

11/07/2021 · Interactive Error Correcting Codes Over Binary Erasure Channels Resilient to >1/2 Adversarial Corruption
An error correcting code (𝖤𝖢𝖢) allows a sender to send a message to a re...

08/27/2018 · Data Poisoning Attacks against Online Learning
We consider data poisoning attacks, a class of adversarial attacks on ma...

03/08/2022 · Robustly-reliable learners under poisoning attacks
Data poisoning attacks, in which an adversary corrupts a training set wi...

03/27/2017 · Adversarial Source Identification Game with Corrupted Training
We study a variant of the source identification game with training data ...
