Constant Regret, Generalized Mixability, and Mirror Descent

02/20/2018
by   Zakaria Mhammedi, et al.

We consider the setting of prediction with expert advice; a learner makes predictions by aggregating those of a group of experts. Under this setting, and with the right choice of loss function and "mixing" algorithm, it is possible for the learner to achieve constant regret, i.e., to keep the gap between its cumulative loss and that of the best expert bounded regardless of the number of prediction rounds. For example, constant regret can be achieved with mixable losses using the Aggregating Algorithm (AA). The Generalized Aggregating Algorithm (GAA) is a name for a family of algorithms parameterized by convex functions on simplices (entropies), which reduces to the AA when the Shannon entropy is used. For a given entropy Φ, losses for which constant regret is possible using the GAA are called Φ-mixable. Which losses are Φ-mixable was previously left as an open question. We fully characterize Φ-mixability and answer other open questions posed by [Reid2015]. We also elaborate on the tight link between the GAA and the mirror descent algorithm, which minimizes the weighted loss of experts.
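
To make the notions above concrete: a loss ℓ is η-mixable when expert predictions can always be "mixed" into a single prediction that does at least as well as an exponentially weighted combination of the experts' losses. Below is a sketch of the standard condition in notation common to this literature (the paper's own notation may differ):

```latex
% eta-mixability: for every prior q over experts and predictions (a_theta),
% there exists a prediction a such that, for every outcome y,
\[
  \ell(a, y) \;\le\; -\tfrac{1}{\eta}\,\log \sum_{\theta} q_\theta\, e^{-\eta\, \ell(a_\theta, y)}
  \;=\; \inf_{p \in \Delta}\Big[\textstyle\sum_{\theta} p_\theta\, \ell(a_\theta, y)
        \;+\; \tfrac{1}{\eta}\,\mathrm{KL}(p \,\|\, q)\Big],
\]
% where the equality is the Gibbs variational principle. Phi-mixability
% replaces (1/eta) KL(p || q) with the Bregman divergence D_Phi(p, q) of a
% general entropy Phi, recovering the above for the scaled Shannon entropy.
```

The weight-update core of the AA is then just exponential reweighting of experts by their past losses. Here is a minimal, illustrative Python sketch; the function and parameter names are ours, and the loss-specific substitution function that turns the weights into an actual prediction is omitted:

```python
import numpy as np

def aggregating_algorithm_weights(expert_losses, eta=1.0):
    """Exponential-weights core of the Aggregating Algorithm (sketch).

    expert_losses: (T, n) array; entry [t, i] is expert i's loss in round t.
    eta: learning rate; for an eta-mixable loss, these weights (combined
         with a suitable substitution function) give regret at most
         log(n) / eta, a constant independent of T.
    Returns the (T, n) array of weight vectors used in each round.
    """
    T, n = expert_losses.shape
    w = np.full(n, 1.0 / n)            # uniform prior over the n experts
    weights = np.empty((T, n))
    for t in range(T):
        weights[t] = w
        w = w * np.exp(-eta * expert_losses[t])  # exponential reweighting
        w /= w.sum()                   # renormalize onto the simplex
    return weights
```

With the Shannon entropy, this exponential update coincides with a mirror descent step on the simplex; the GAA replaces it with the mirror map induced by a general entropy Φ, which is the link to mirror descent the abstract refers to.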
