Learning under p-Tampering Attacks

11/10/2017
by Saeed Mahloujifar, et al.

Mahloujifar and Mahmoody (TCC'17) studied attacks against learning algorithms using a special case of Valiant's malicious noise, called p-tampering, in which the adversary may change each training example with independent probability p, but only using correct labels. They demonstrated the power of such attacks by increasing the error probability in the so-called 'targeted' poisoning model, in which the adversary's goal is to increase the loss of the generated hypothesis on a particular test example. At the heart of their attack was an efficient algorithm for biasing the average output of any bounded real-valued function through p-tampering.

In this work, we present new attacks for biasing the average output of bounded real-valued functions, improving upon the biasing attacks of Mahloujifar and Mahmoody. Our improved biasing attacks directly imply improved p-tampering attacks against learners in the targeted poisoning model. As a bonus, our attacks come with considerably simpler analyses than previous attacks.

We also study the possibility of PAC learning under p-tampering attacks in the non-targeted (a.k.a. indiscriminate) setting, where the adversary's goal is to increase the risk of the generated hypothesis (for a random test example). We show that PAC learning is possible under p-tampering poisoning attacks essentially whenever it is possible in the realizable setting without attacks. We further show that PAC learning under 'no-mistake' adversarial noise is not possible if the adversary can choose which examples to tamper with (still limited to a p fraction of them) and substitute them with adversarially chosen ones. Our formal model for such 'bounded-budget' tampering attackers is inspired by the notions of (strong) adaptive corruption in secure multi-party computation.
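To make the biasing primitive concrete, here is a minimal simulation sketch in Python of a toy setting (our own illustration, not taken from the paper): each coordinate of a uniformly random bit string is handed to the tamperer with independent probability p, and the tamperer greedily picks the value that maximizes a Monte Carlo estimate of the conditional mean of a bounded function f. The greedy one-step lookahead and the particular choice of f are illustrative assumptions rather than the paper's actual attack, but they exhibit the phenomenon: the average of f shifts upward by a p-dependent amount.

```python
import random

def estimate_cond_mean(f, prefix, n, trials=200):
    """Monte Carlo estimate of E[f(prefix + uniform suffix)]."""
    total = 0.0
    for _ in range(trials):
        suffix = [random.randint(0, 1) for _ in range(n - len(prefix))]
        total += f(prefix + suffix)
    return total / trials

def p_tampered_sample(f, n, p):
    """Draw x in {0,1}^n where each coordinate is tamperable w.p. p."""
    x = []
    for _ in range(n):
        if random.random() < p:
            # Tamperer's greedy move: pick the bit maximizing the
            # estimated conditional mean of f given the prefix so far.
            b = max((0, 1), key=lambda b: estimate_cond_mean(f, x + [b], n))
            x.append(b)
        else:
            x.append(random.randint(0, 1))  # honest uniform bit
    return x

# Toy bounded function f: {0,1}^n -> [0,1], the fraction of ones.
f = lambda x: sum(x) / len(x)

n, p, runs = 20, 0.2, 300
honest = sum(f([random.randint(0, 1) for _ in range(n)])
             for _ in range(runs)) / runs
attacked = sum(f(p_tampered_sample(f, n, p)) for _ in range(runs)) / runs
print(f"E[f] honest:   {honest:.3f}")    # about 0.5
print(f"E[f] attacked: {attacked:.3f}")  # about 0.5 + p/2 for this f
```

For this particular linear f the greedy tamperer simply sets every coordinate it controls to 1, so the bias is exactly p/2; the substance of the paper lies in biasing attacks whose guarantees hold for arbitrary bounded real-valued f, where the right tampering choice depends on the conditional structure of f.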


Related research

09/10/2018 · Multi-party Poisoning through Generalized p-Tampering
In a poisoning attack against a learning algorithm, an adversary tampers...

03/08/2022 · Robustly-reliable learners under poisoning attacks
Data poisoning attacks, in which an adversary corrupts a training set wi...

05/18/2021 · Learning and Certification under Instance-targeted Poisoning
In this paper, we study PAC learnability and certification under instanc...

10/30/2015 · Learning Adversary Behavior in Security Games: A PAC Model Perspective
Recent applications of Stackelberg Security Games (SSG), from wildlife c...

09/09/2018 · The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure
Many modern machine learning classifiers are shown to be vulnerable to a...

06/05/2018 · PAC-learning in the presence of evasion adversaries
The existence of evasion attacks during the test phase of machine learni...

06/25/2022 · Cascading Failures in Smart Grids under Random, Targeted and Adaptive Attacks
We study cascading failures in smart grids, where an attacker selectivel...
