A Regret-Variance Trade-Off in Online Learning

06/06/2022
by Dirk van der Hoeven, et al.

We consider prediction with expert advice for strongly convex and bounded losses, and investigate trade-offs between regret and "variance" (i.e., the squared difference between the learner's predictions and the best expert's predictions). With K experts, the Exponentially Weighted Average (EWA) algorithm is known to achieve O(log K) regret. We prove that a variant of EWA either achieves negative regret (i.e., the algorithm outperforms the best expert), or guarantees an O(log K) bound on both variance and regret. Building on this result, we show several examples of how the variance of predictions can be exploited in learning. In online-to-batch analysis, we show that a large empirical variance allows us to stop the online-to-batch conversion early and outperform the risk of the best predictor in the class. We also recover the optimal rate of model selection aggregation when we do not consider early stopping. In online prediction with corrupted losses, we show that the effect of corruption on the regret can be compensated for by a large variance. In online selective sampling, we design an algorithm that samples less when the variance is large, while guaranteeing the optimal regret bound in expectation. In online learning with abstention, we use a term similar to the variance to derive the first high-probability O(log K) regret bound in this setting. Finally, we extend our results to the setting of online linear regression.
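To make the baseline concrete, here is a minimal sketch of the standard EWA forecaster for prediction with expert advice under squared loss on [0, 1]. This is the classical algorithm the abstract refers to, not the paper's variant; the learning rate `eta = 0.5` is an assumption chosen so that the squared loss is eta-exp-concave on [0, 1], which yields the O(log K) regret guarantee.

```python
import numpy as np

def ewa_predict(expert_preds, cum_losses, eta):
    """Weighted-average EWA prediction: weights decay exponentially
    in each expert's cumulative loss so far."""
    weights = np.exp(-eta * (cum_losses - cum_losses.min()))  # shift for stability
    weights /= weights.sum()
    return weights @ expert_preds

# Toy run: K experts, T rounds, squared loss (p - y)^2 with p, y in [0, 1].
rng = np.random.default_rng(0)
K, T, eta = 3, 200, 0.5
cum_losses = np.zeros(K)
learner_loss = 0.0
for t in range(T):
    preds = rng.uniform(0, 1, K)   # experts' predictions this round
    y = 0.3                        # outcome (fixed here for illustration)
    p = ewa_predict(preds, cum_losses, eta)
    learner_loss += (p - y) ** 2
    cum_losses += (preds - y) ** 2

regret = learner_loss - cum_losses.min()
# For eta-exp-concave losses, EWA guarantees regret <= log(K) / eta.
```

Since squared loss on [0, 1] is (1/2)-exp-concave, the run above satisfies the deterministic bound regret <= log(K) / eta regardless of the outcome sequence; the paper's contribution is a variant whose bound additionally interacts with the variance of the predictions.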

