Second-order Quantile Methods for Experts and Combinatorial Games

02/27/2015
by   Wouter M. Koolen, et al.

We aim to design strategies for sequential decision making that adjust to the difficulty of the learning problem. We study this question both in the setting of prediction with expert advice and for more general combinatorial decision tasks. We are not satisfied with merely guaranteeing minimax regret rates; we want our algorithms to perform significantly better on easy data. Two popular ways to formalize such adaptivity are second-order regret bounds and quantile bounds. The underlying notions of "easy data", which may be paraphrased as "the learning problem has small variance" and "multiple decisions are useful", are synergistic. But even though sophisticated algorithms exist that exploit one of the two, no existing algorithm is able to adapt to both. In this paper we outline a new method for obtaining such adaptive algorithms, based on a potential function that aggregates a range of learning rates (which are essential tuning parameters). By choosing the right prior we construct efficient algorithms, and we show that they reap both benefits by proving the first bounds that are both second-order and incorporate quantiles.
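The core idea in the abstract, aggregating over a range of learning rates via a potential function, can be illustrated with a small sketch. The code below is not the paper's implementation: the function names, the discrete grid of learning rates standing in for a continuous prior, and the exact potential form (a second-order exponential weight in cumulative regret and its sum of squares) are illustrative assumptions.

```python
import numpy as np

def aggregated_weights(R, V, prior, etas, eta_prior):
    """Expert weights from a potential that averages over learning rates.

    R, V      : per-expert cumulative instantaneous regret and its sum of squares
    prior     : prior mass on each expert (enables quantile-style bounds)
    etas      : grid of candidate learning rates
    eta_prior : prior mass on each learning rate
    """
    # Potential per expert: sum over the eta-grid of
    #   eta_prior * eta * exp(eta * R - eta^2 * V),
    # i.e. a second-order (variance-penalized) exponential weight,
    # averaged over learning rates instead of tuning a single one.
    expo = np.outer(R, etas) - np.outer(V, etas ** 2)      # shape (K, G)
    potential = (etas * eta_prior * np.exp(expo)).sum(axis=1)
    w = prior * potential
    return w / w.sum()

def play_round(w, expert_losses, R, V):
    """Play the mixture w, observe losses, update the second-order statistics."""
    alg_loss = w @ expert_losses
    r = alg_loss - expert_losses       # instantaneous regret per expert
    return R + r, V + r ** 2
```

A short simulation shows the intended behavior: with one consistently good expert, the aggregated potential shifts weight toward it without any hand-tuned learning rate.

```python
etas = np.array([0.05, 0.1, 0.2, 0.4])
eta_prior = np.full(4, 0.25)
prior = np.full(2, 0.5)
R, V = np.zeros(2), np.zeros(2)
losses = np.array([0.0, 1.0])          # expert 0 is always better
for _ in range(20):
    w = aggregated_weights(R, V, prior, etas, eta_prior)
    R, V = play_round(w, losses, R, V)
w = aggregated_weights(R, V, prior, etas, eta_prior)
```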

