No-Regret Learning with Unbounded Losses: The Case of Logarithmic Pooling

02/22/2022
by Eric Neyman, et al.

For each of T time steps, m experts report probability distributions over n outcomes; we wish to learn to aggregate these forecasts in a way that attains a no-regret guarantee. We focus on the fundamental and practical aggregation method known as logarithmic pooling, a weighted average of log odds, which is in a certain sense the optimal choice of pooling method when one aims to minimize log loss (as we take to be our loss function). We consider the problem of learning the best set of parameters (i.e., expert weights) in an online adversarial setting. We assume, by necessity, that the adversary's choices of outcomes and forecasts are consistent, in the sense that experts report calibrated forecasts. Our main result is an algorithm based on online mirror descent that learns expert weights in a way that attains O(√T log T) expected regret compared with the best weights in hindsight.
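To make the setting concrete, below is a minimal Python sketch of logarithmic pooling and of a generic online mirror descent step on the weight simplex with an entropic mirror map (an exponentiated-gradient update). This is an illustration of the general technique under stated assumptions, not the paper's exact algorithm: the function names (log_pool, log_loss_grad, omd_step) and the fixed step size eta are ours, and the paper's regret bound depends on a more careful choice of step sizes than a single fixed eta.

```python
import numpy as np

def log_pool(forecasts, weights):
    """Logarithmic pool: the normalized, weighted geometric mean of the
    experts' distributions (equivalently, a weighted average of log odds).

    forecasts: (m, n) array; row i is expert i's distribution over n outcomes.
    weights:   (m,) nonnegative array summing to 1.
    """
    log_q = weights @ np.log(forecasts)  # weighted average of log-probabilities
    log_q -= log_q.max()                 # shift by the max for numerical stability
    q = np.exp(log_q)
    return q / q.sum()

def log_loss_grad(forecasts, weights, outcome):
    """Gradient of the log loss -log q_w(outcome) with respect to the weights:
    d/dw_i = -log p_i(outcome) + sum_x q_w(x) log p_i(x)."""
    log_p = np.log(forecasts)            # (m, n)
    q = log_pool(forecasts, weights)     # (n,) pooled distribution
    return -log_p[:, outcome] + log_p @ q

def omd_step(weights, grad, eta):
    """One online mirror descent step with the entropic mirror map on the
    simplex, i.e. a multiplicative-weights / exponentiated-gradient update."""
    w = weights * np.exp(-eta * grad)
    return w / w.sum()
```

A toy run of the learning loop, with random calibrated-looking forecasts standing in for the adversary's choices:

```python
rng = np.random.default_rng(0)
w = np.full(2, 0.5)                         # m = 2 experts, uniform start
for t in range(100):
    p = rng.dirichlet(np.ones(2), size=2)   # each expert's forecast over n = 2 outcomes
    y = rng.integers(2)                     # realized outcome
    w = omd_step(w, log_loss_grad(p, w, y), eta=0.1)
```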


Related research

02/14/2021 · From Proper Scoring Rules to Max-Min Optimal Forecast Aggregation
This paper forges a strong connection between two seemingly unrelated fo...

12/15/2019 · Integral Mixability: a Tool for Efficient Online Aggregation of Functional and Probabilistic Forecasts
In this paper we extend the setting of the online prediction with expert...

06/06/2022 · A Regret-Variance Trade-Off in Online Learning
We consider prediction with expert advice for strongly convex and bounde...

02/27/2019 · Adaptive Hedging under Delayed Feedback
The article is devoted to investigating the application of hedging strat...

11/11/2019 · Learning The Best Expert Efficiently
We consider online learning problems where the aim is to achieve regret ...

08/02/2018 · Online Aggregation of Unbounded Losses Using Shifting Experts with Confidence
We develop the setting of sequential prediction based on shifting expert...

01/08/2019 · Soft-Bayes: Prod for Mixtures of Experts with Log-Loss
We consider prediction with expert advice under the log-loss with the go...
