Universal Regression with Adversarial Responses

03/09/2022
by Moise Blanchard, et al.

We provide algorithms for regression with adversarial responses under large classes of non-i.i.d. instance sequences, on general separable metric spaces, with provably minimal assumptions. We also characterize learnability in this regression context. We consider universal consistency, which asks for strong consistency of a learner without restrictions on the value responses. Our analysis shows that this objective is achievable for a significantly larger class of instance sequences than stationary processes, and it unveils a fundamental dichotomy between value spaces: whether finite-horizon mean-estimation is achievable or not. We further provide optimistically universal learning rules, i.e., rules such that if they fail to achieve universal consistency, any other algorithm fails as well. For unbounded losses, we propose a mild integrability condition under which there exist algorithms for adversarial regression under large classes of non-i.i.d. instance sequences. In addition, our analysis provides a learning rule for mean-estimation in general metric spaces that is consistent under adversarial responses without any moment conditions on the sequence, a result of independent interest.
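To make the mean-estimation setting concrete: in a general metric space there is no vector addition, so the "mean" is usually taken to be a Fréchet mean, a point minimizing the sum of squared distances to the observations. The sketch below is not the paper's learning rule (which additionally handles adversarial responses without moment conditions); it is only a minimal illustration of the empirical Fréchet medoid, restricting the minimizer to the observed points so it works with nothing but a distance function. The function name `frechet_medoid` and the toy data are illustrative choices, not from the paper.

```python
import math

def frechet_medoid(points, dist):
    """Return the observed point minimizing the sum of squared
    distances to all observations (an empirical Frechet medoid).

    `points` is a sequence of elements of the metric space and
    `dist` is the metric, given as a two-argument function."""
    best, best_cost = None, math.inf
    for candidate in points:
        cost = sum(dist(candidate, p) ** 2 for p in points)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best

# Example: the real line with the usual metric. The medoid is the
# observed point closest to the empirical mean, so a single outlier
# (5.0) shifts it only slightly.
data = [0.9, 1.1, 1.0, 5.0, 1.05]
estimate = frechet_medoid(data, lambda a, b: abs(a - b))
```

Restricting to observed points keeps the rule well defined in any separable metric space, at the cost of an O(n²) scan per update; this is a convenience of the illustration, not a design choice made in the paper.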


