Generalised Mixability, Constant Regret, and Bayesian Updating

03/10/2014
by Mark D. Reid, et al.

Mixability of a loss is known to characterise when constant regret bounds are achievable in games of prediction with expert advice through the use of Vovk's aggregating algorithm. We provide a new interpretation of mixability via convex analysis that highlights the role of the Kullback-Leibler divergence in its definition. This naturally generalises to what we call Φ-mixability where the Bregman divergence D_Φ replaces the KL divergence. We prove that losses that are Φ-mixable also enjoy constant regret bounds via a generalised aggregating algorithm that is similar to mirror descent.
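The classical (KL) case of the aggregating algorithm described above is just exponential weights: each expert's weight is multiplied by exp(-η × loss) and renormalised, which coincides with Bayesian posterior updating under log loss. A minimal sketch of that update (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def aggregating_weights(losses, eta=1.0):
    """Exponential-weights update at the core of Vovk's aggregating
    algorithm (the KL-divergence / classical-mixability case).

    losses: (T, N) array of per-round losses for N experts.
    eta:    learning rate; for an eta-mixable loss this yields
            constant (T-independent) regret.
    Returns the final normalised weight vector over experts.
    """
    T, N = losses.shape
    w = np.full(N, 1.0 / N)                 # uniform prior over experts
    for t in range(T):
        w = w * np.exp(-eta * losses[t])    # multiplicative (Bayesian) update
        w = w / w.sum()                     # renormalise to a distribution
    return w
```

In the generalised (Φ-mixable) setting, this exp-and-normalise step is replaced by a mirror-descent-style update in the dual geometry induced by Φ, with the Bregman divergence D_Φ playing the role of the KL divergence.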


Related research

Constant Regret, Generalized Mixability, and Mirror Descent (02/20/2018)

Exp-Concavity of Proper Composite Losses (05/20/2018)

Bregman Divergence Bounds and the Universality of the Logarithmic Loss (10/14/2018)

Isotuning With Applications To Scale-Free Online Learning (12/29/2021)

On loss functions and regret bounds for multi-category classification (05/17/2020)

Concentration and Confidence for Discrete Bayesian Sequence Predictors (06/29/2013)
