On the Sample Complexity of Adversarial Multi-Source PAC Learning

02/24/2020
by Nikola Konstantinov, et al.

We study the problem of learning from multiple untrusted data sources, a scenario of increasing practical relevance given the recent emergence of crowdsourcing and collaborative learning paradigms. Specifically, we analyze the situation in which a learning system obtains datasets from multiple sources, some of which might be biased or even adversarially perturbed. It is known that in the single-source case, an adversary with the power to corrupt a fixed fraction of the training data can prevent PAC-learnability; that is, even in the limit of infinitely many training samples, no learning system can approach the optimal test error. In this work we show that, surprisingly, the same is not true in the multi-source setting, where the adversary can arbitrarily corrupt a fixed fraction of the data sources. Our main results are a generalization bound that provides finite-sample guarantees for this learning setting, as well as corresponding lower bounds. Besides establishing PAC-learnability, our results also show that in a cooperative learning setting, sharing data with other parties has provable benefits, even if some participants are malicious.
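The intuition behind why corrupting a fraction of *sources* is weaker than corrupting the same fraction of *samples* can be illustrated with a toy mean-estimation sketch (our own illustration, not the paper's construction): pooling all data lets a constant fraction of corrupted samples drag the estimate arbitrarily far, while aggregating per-source estimates with a median tolerates a minority of fully corrupted sources.

```python
# Toy illustration (not the paper's algorithm): estimating a mean from
# multiple sources when a minority of sources is adversarially corrupted.
import random
from statistics import median

random.seed(0)

TRUE_MEAN = 1.0
N_SOURCES = 10       # number of data sources
N_PER_SOURCE = 200   # samples per source
N_CORRUPT = 3        # adversary controls this many sources (a fixed fraction)

def draw_source(corrupted):
    """One dataset: i.i.d. Gaussian samples, or arbitrary adversarial values."""
    if corrupted:
        return [TRUE_MEAN + 100.0] * N_PER_SOURCE  # adversarial shift
    return [random.gauss(TRUE_MEAN, 1.0) for _ in range(N_PER_SOURCE)]

sources = [draw_source(i < N_CORRUPT) for i in range(N_SOURCES)]

# Naive pooling treats the union as one dataset: corrupted samples make up
# a constant fraction, so the estimate is dragged arbitrarily far away.
pooled = [x for s in sources for x in s]
pooled_mean = sum(pooled) / len(pooled)

# Source-aware aggregation: compute a per-source estimate, then take the
# median across sources. With fewer than half the sources corrupted, the
# median lands among the honest estimates.
robust = median(sum(s) / len(s) for s in sources)

print(f"pooled mean: {pooled_mean:.2f}")            # dragged far from 1.0
print(f"median of per-source means: {robust:.2f}")  # close to 1.0
```

The same contrast drives the paper's result: the source structure gives the learner leverage that is information-theoretically unavailable when the same corruption budget is spread over individual samples.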


Related research

07/22/2023  The Sample Complexity of Multi-Distribution Learning for VC Classes
            Multi-distribution learning is a natural generalization of PAC learning ...

06/13/2019  Lower Bounds for Adversarially Robust PAC Learning
            In this work, we initiate a formal study of probably approximately corre...

05/12/2022  Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks
            A fundamental problem in adversarial machine learning is to quantify how...

05/22/2018  Improved Algorithms for Collaborative PAC Learning
            We study a recent model of collaborative PAC learning where k players wi...

10/30/2015  Learning Adversary Behavior in Security Games: A PAC Model Perspective
            Recent applications of Stackelberg Security Games (SSG), from wildlife c...

05/12/2018  Do Outliers Ruin Collaboration?
            We consider the problem of learning a binary classifier from n different...

06/22/2021  FLEA: Provably Fair Multisource Learning from Unreliable Training Data
            Fairness-aware learning aims at constructing classifiers that not only m...
