Optimal Non-Asymptotic Lower Bound on the Minimax Regret of Learning with Expert Advice

11/06/2015
by Francesco Orabona et al.

We prove non-asymptotic lower bounds on the expectation of the maximum of d independent Gaussian variables and the expectation of the maximum of d independent symmetric random walks. Both lower bounds recover the optimal leading constant in the limit. A simple application of the lower bound for random walks is an (asymptotically optimal) non-asymptotic lower bound on the minimax regret of online learning with expert advice.
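As a quick illustrative check (not the paper's construction), the quantities the abstract refers to can be probed by Monte Carlo simulation. The sketch below assumes NumPy and uses illustrative values of d and n; it compares empirical estimates against the classical asymptotic forms sqrt(2 ln d) for the Gaussian maximum and sqrt(2 n ln d) for the random-walk maximum, and prints the corresponding asymptotic minimax regret sqrt((n/2) ln d) for learning with d experts over n rounds. The ratios approach 1 only slowly in d, which is exactly why non-asymptotic bounds with the correct leading constant are of interest.

# Minimal Monte Carlo sketch (illustrative only; sizes are not from the paper).
import numpy as np

rng = np.random.default_rng(0)
trials, d, n = 100, 200, 1000  # number of repetitions, experts/variables, walk length

# Expected maximum of d independent standard Gaussians, estimated over trials.
gauss_max = rng.standard_normal((trials, d)).max(axis=1).mean()
print(f"E[max of {d} Gaussians]    ~ {gauss_max:.3f}  vs  sqrt(2 ln d) = {np.sqrt(2 * np.log(d)):.3f}")

# Expected maximum over d independent symmetric +/-1 random walks after n steps.
steps = 2 * rng.integers(0, 2, size=(trials, d, n), dtype=np.int8) - 1
walk_max = steps.sum(axis=2).max(axis=1).mean()
print(f"E[max of {d} walks, n={n}] ~ {walk_max:.1f}  vs  sqrt(2 n ln d) = {np.sqrt(2 * n * np.log(d)):.1f}")

# Classical asymptotic minimax regret with d experts over n rounds,
# which the random-walk lower bound in the abstract matches non-asymptotically.
print(f"sqrt((n/2) ln d) = {np.sqrt((n / 2) * np.log(d)):.1f}")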
