Optimal learning with Bernstein Online Aggregation

04/04/2014
by   Olivier Wintenberger, et al.
We introduce a new recursive aggregation procedure called Bernstein Online Aggregation (BOA). Its exponential weights include an accuracy term and a second-order term that serves as a proxy for the quadratic variation, as in Hazan and Kale (2010). This second-order term stabilizes the procedure, which is optimal in several senses. We first obtain optimal regret bounds in the deterministic setting. An adaptive version is then the first exponential weights algorithm to exhibit a second-order bound with excess losses, of the type first established in Gaillard et al. (2014). The second-order bounds of the deterministic setting extend to a general stochastic setting via the cumulative predictive risk. This conversion yields the main result of the paper: an inequality of a novel type comparing the procedure with any deterministic aggregation procedure under an integrated criterion. We then derive an observable estimate of the excess risk of the BOA procedure. Finally, to assess optimality, we consider the i.i.d. case with strongly convex and Lipschitz continuous losses and prove that the optimal rate of aggregation of Tsybakov (2003) is achieved. The batch version of the BOA procedure is thus the first adaptive, explicit algorithm satisfying an optimal oracle inequality with high probability.
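To make the idea concrete, here is a minimal sketch of a BOA-style exponential-weights update. It is an illustration of the structure the abstract describes (an accuracy term plus a second-order correction on the excess losses), not the paper's exact algorithm; the function name `boa_update`, the fixed learning rate `eta`, and the toy two-expert setup are assumptions for the example.

```python
import numpy as np

def boa_update(weights, expert_losses, agg_loss, eta):
    """One step of a BOA-style exponential-weights update (sketch).

    ell_j is the excess loss of expert j relative to the aggregated
    prediction; the extra eta * ell_j**2 factor inside the exponent is
    the second-order term that stabilizes the weights.
    """
    ell = expert_losses - agg_loss  # excess (instantaneous regret) losses
    w = weights * np.exp(-eta * ell * (1.0 + eta * ell))
    return w / w.sum()  # renormalize to a probability vector

# Toy run: two experts predicting a noisy scalar target under squared loss.
rng = np.random.default_rng(0)
w = np.ones(2) / 2          # uniform initial weights
eta = 0.5                   # fixed learning rate (the paper's adaptive version tunes this)
for _ in range(100):
    y = 1.0 + 0.1 * rng.standard_normal()
    preds = np.array([1.0, 0.0])   # expert 0 is accurate, expert 1 is not
    agg = w @ preds                # aggregated (weighted-average) prediction
    w = boa_update(w, (preds - y) ** 2, (agg - y) ** 2, eta)
print(w)  # weight concentrates on the accurate expert 0
```

Because the excess losses are measured against the aggregated prediction itself, a consistently worse expert accumulates positive excess loss and its weight decays geometrically, while the second-order factor damps overreaction to single large losses.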
