Constant Regret, Generalized Mixability, and Mirror Descent

02/20/2018
by   Zakaria Mhammedi, et al.

We consider the setting of prediction with expert advice; a learner makes predictions by aggregating those of a group of experts. In this setting, and for the right choice of loss function and "mixing" algorithm, it is possible for the learner to achieve constant regret: the gap between the learner's cumulative loss and that of the best expert remains bounded regardless of the number of prediction rounds. For example, constant regret can be achieved with mixable losses using the Aggregating Algorithm (AA). The Generalized Aggregating Algorithm (GAA) is a family of algorithms parameterized by convex functions on the simplex (entropies), which reduces to the AA when the entropy is the Shannon entropy. For a given entropy Φ, losses for which constant regret is possible using the GAA are called Φ-mixable. Which losses are Φ-mixable was previously left as an open question. We fully characterize Φ-mixability and answer other open questions posed by [Reid2015]. We also elaborate on the tight link between the GAA and the mirror descent algorithm, which minimizes the weighted loss of experts.
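To make the constant-regret claim concrete, here is a minimal sketch of Vovk's Aggregating Algorithm for binary log loss with learning rate η = 1 (log loss is 1-mixable, so the AA's regret is at most ln N for N experts). For this particular loss and learning rate, the substitution function reduces to the weighted mean of the expert probabilities; the function name and interface below are illustrative, not from the paper.

```python
import math

def aggregating_algorithm(expert_preds, outcomes, eta=1.0):
    """Aggregating Algorithm for binary log loss (sketch, eta = 1).

    expert_preds: list of rounds, each a list of N probabilities of outcome 1.
    outcomes: list of observed outcomes in {0, 1}.
    Returns (learner's cumulative loss, list of experts' cumulative losses).
    """
    n = len(expert_preds[0])
    log_w = [0.0] * n          # log-weights, i.e. uniform prior over experts
    learner_loss = 0.0
    expert_loss = [0.0] * n
    for preds, y in zip(expert_preds, outcomes):
        # Normalize weights (subtract max log-weight for numerical stability).
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        s = sum(w)
        w = [wi / s for wi in w]
        # For log loss with eta = 1, the AA prediction is the
        # weight-averaged expert probability (Bayesian mixture).
        p = sum(wi * pi for wi, pi in zip(w, preds))
        learner_loss += -math.log(p if y == 1 else 1.0 - p)
        # Update: each expert's weight decays exponentially in its loss.
        for i, pi in enumerate(preds):
            li = -math.log(pi if y == 1 else 1.0 - pi)
            expert_loss[i] += li
            log_w[i] -= eta * li
    return learner_loss, expert_loss
```

Mixability of the log loss guarantees that `learner_loss - min(expert_loss)` never exceeds ln N, no matter how many rounds are played; the GAA of the paper generalizes the exponential-weights update above by replacing the Shannon entropy with an arbitrary entropy Φ.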

