Sequential prediction under log-loss and misspecification

01/29/2021
by Meir Feder, et al.

We consider the question of sequential prediction under the log-loss in terms of cumulative regret. Namely, given a hypothesis class of distributions, a learner sequentially predicts the (distribution of the) next letter in the sequence, and its performance is compared to the baseline of the best constant predictor from the hypothesis class. The well-specified case corresponds to the additional assumption that the data-generating distribution belongs to the hypothesis class as well. Here we present results for the more general misspecified case. Due to special properties of the log-loss, the same problem arises in the contexts of competitive optimality in density estimation and of model selection. For the d-dimensional Gaussian location hypothesis class, we show that the cumulative regrets in the well-specified and misspecified cases asymptotically coincide. In other words, we provide an o(1)-characterization of the distribution-free (or PAC) regret in this case; to our knowledge, this is the first such result. We recall that the worst-case (or individual-sequence) regret in this case is larger by an additive constant d/2 + o(1). Surprisingly, neither traditional Bayesian estimators nor Shtarkov's normalized maximum likelihood achieves the PAC regret, and our estimator requires a special "robustification" against heavy-tailed data. In addition, we show two general results for the misspecified regret: the existence and uniqueness of the optimal estimator, and a bound sandwiching the misspecified regret between well-specified regrets of (asymptotically) close hypothesis classes.
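To make the regret notion concrete: the cumulative regret after n rounds is the learner's total log-loss, sum_t -log q_t(x_t), minus the log-loss of the best constant predictor chosen in hindsight, min_theta sum_t -log p_theta(x_t). The sketch below is our own illustration (not the paper's estimator) for the one-dimensional Gaussian location class {N(theta, 1)}: it runs a standard conjugate Bayesian predictor and compares well-specified against heavy-tailed misspecified data. The prior scale tau2 and the test distributions are assumptions made for the example.

```python
import numpy as np

def gaussian_nll(x, mean, var):
    # Negative log-density of N(mean, var) at x (elementwise for arrays).
    return 0.5 * np.log(2 * np.pi * var) + (x - mean) ** 2 / (2 * var)

def cumulative_regret(xs, tau2=1.0):
    """Cumulative log-loss regret of a conjugate Bayesian predictor
    (prior N(0, tau2), unit-variance Gaussian likelihood) against the
    best constant predictor from {N(theta, 1)} chosen in hindsight."""
    loss_learner = 0.0
    running_sum = 0.0
    for n, x in enumerate(xs):               # n = points observed so far
        post_var = tau2 / (n * tau2 + 1.0)   # posterior variance of theta
        post_mean = running_sum * post_var   # posterior mean of theta
        # Predictive density for the next point is N(post_mean, 1 + post_var).
        loss_learner += gaussian_nll(x, post_mean, 1.0 + post_var)
        running_sum += x
    # Hindsight-optimal constant predictor: theta* = sample mean.
    loss_best = gaussian_nll(np.asarray(xs), np.mean(xs), 1.0).sum()
    return loss_learner - loss_best

rng = np.random.default_rng(0)
# Well-specified data: regret grows roughly like (1/2) log n for d = 1.
print(cumulative_regret(rng.normal(0.5, 1.0, size=1000)))
# Misspecified, heavy-tailed data: the abstract notes that plain Bayesian
# predictors can fail to achieve the PAC regret without "robustification".
print(cumulative_regret(rng.standard_t(df=2, size=1000)))
```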

Related research

03/03/2020 · Contextual Search for General Hypothesis Classes
We study a general version of the problem of online learning under binar...

01/13/2021 · On Misspecification in Prediction Problems and Robustness via Improper Learning
We study probabilistic prediction games when the underlying model is mis...

10/05/2022 · Constant regret for sequence prediction with limited advice
We investigate the problem of cumulative regret minimization for individ...

06/25/2021 · Littlestone Classes are Privately Online Learnable
We consider the problem of online classification under a privacy constra...

10/21/2017 · A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity
We present a novel notion of complexity that interpolates between and ge...

09/08/2021 · Sharp regret bounds for empirical Bayes and compound decision problems
We consider the classical problems of estimating the mean of an n-dimens...

07/06/2022 · Model Selection in Reinforcement Learning with General Function Approximations
We consider model selection for classic Reinforcement Learning (RL) envi...
