On Fundamental Limits of Robust Learning

03/30/2017
by Sharky.TV, et al.

We consider the problems of robust PAC learning from distributed and streaming data, which may contain malicious errors and outliers, and analyze the fundamental complexity questions they raise. In particular, we establish lower bounds on the communication complexity of distributed robust learning performed across multiple machines, and on the space complexity of robust learning from streaming data on a single machine. These results demonstrate that making learning algorithms robust usually comes at the cost of increased complexity. To the best of our knowledge, this work gives the first complexity results for distributed and online robust PAC learning.
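The space/robustness trade-off the abstract refers to can be illustrated with a standard robust-statistics technique. The sketch below is NOT the algorithm analyzed in the paper; it is a minimal median-of-means estimator that processes a stream in one pass using O(k) memory, where a larger number of buckets k tolerates more malicious outliers at the cost of more space:

```python
import statistics

def streaming_median_of_means(stream, k=7):
    """Single-pass mean estimate robust to a few outliers.

    Maintains k running (sum, count) buckets -- O(k) space regardless
    of stream length -- and returns the median of the bucket means.
    Round-robin bucket assignment is a simplification; practical
    median-of-means assigns samples to buckets at random so an
    adversary cannot target specific buckets.
    """
    sums = [0.0] * k
    counts = [0] * k
    for i, x in enumerate(stream):
        b = i % k          # round-robin assignment to a bucket
        sums[b] += x
        counts[b] += 1
    means = [s / c for s, c in zip(sums, counts) if c > 0]
    return statistics.median(means)

# A stream of values near 1.0 corrupted by 3 malicious outliers:
# at most 3 of the 7 bucket means are contaminated, so the median
# of the bucket means still recovers the clean value.
data = [1.0] * 97 + [1000.0] * 3
print(streaming_median_of_means(data, k=7))  # -> 1.0
```

The plain running mean of the same stream would be pulled to roughly 31, so the example shows how a constant-space summary can buy robustness to a small fraction of adversarial points.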


Related research

- 12/19/2020: Communication-Aware Collaborative Learning
  Algorithms for noiseless collaborative PAC learning have been analyzed a...

- 09/03/2023: Streaming and Query Once Space Complexity of Longest Increasing Subsequence
  Longest Increasing Subsequence (LIS) is a fundamental problem in combina...

- 10/22/2020: Reducing Adversarially Robust Learning to Non-Robust PAC Learning
  We study the problem of reducing adversarially robust learning to standa...

- 01/01/2017: Outlier Robust Online Learning
  We consider the problem of learning from noisy data in practical setting...

- 02/02/2023: Lower Bounds for Learning in Revealing POMDPs
  This paper studies the fundamental limits of reinforcement learning (RL)...

- 01/12/2020: Fundamental Limits of Online Learning: An Entropic-Innovations Viewpoint
  In this paper, we examine the fundamental performance limitations of onl...

- 11/16/2017: On Communication Complexity of Classification Problems
  This work introduces a model of distributed learning in the spirit of Ya...
