Fast rates for prediction with limited expert advice

10/27/2021
by El Mehdi Saad, et al.

We investigate the problem of minimizing the excess generalization error with respect to the best expert prediction in a finite family, in the stochastic setting and under limited access to information. We assume that the learner can only query the advice of a limited number of experts per training round, and likewise at prediction time. Assuming that the loss function is Lipschitz and strongly convex, we show that if the learner is allowed to see the advice of only one expert per round over T training rounds, or to use the advice of only one expert for prediction in the test phase, the worst-case excess risk is Ω(1/√T) with probability bounded below by a constant. However, if the learner is allowed to see at least two actively chosen experts' advice per training round and to use at least two experts for prediction, the fast rate O(1/T) can be achieved. We design novel algorithms achieving this rate in this setting, and also in the setting where the learner has a budget constraint on the total number of expert advice queries, and we give precise instance-dependent bounds on the number of training rounds and queries needed to reach a given generalization error precision.
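To make the two-queries-per-round regime concrete, here is a minimal, hypothetical Python sketch of a successive-elimination learner that actively queries the losses of two surviving experts each training round and keeps only experts whose confidence intervals overlap with the empirical best. The names (`two_query_elimination`, the oracle `sample_round`) and the Hoeffding-style confidence radius are illustrative assumptions, not the paper's algorithm; the sketch only shows the kind of active querying the setting permits.

```python
import numpy as np

def two_query_elimination(sample_round, n_experts, T, delta=0.05):
    """Hypothetical sketch: successive elimination with two expert
    queries per round. `sample_round(i, j)` is assumed to return the
    losses of experts i and j on a fresh training example."""
    active = list(range(n_experts))
    loss_sum = np.zeros(n_experts)
    counts = np.zeros(n_experts)
    t = 0
    while t < T and len(active) > 1:
        # Actively pick the two least-sampled surviving experts.
        i, j = sorted(active, key=lambda k: counts[k])[:2]
        li, lj = sample_round(i, j)  # the two advice queries this round
        loss_sum[i] += li; counts[i] += 1
        loss_sum[j] += lj; counts[j] += 1
        t += 1

        # Hoeffding-style confidence radius (losses assumed in [0, 1]).
        def radius(k):
            return np.sqrt(np.log(2 * n_experts * T / delta)
                           / (2 * max(counts[k], 1)))

        means = {k: loss_sum[k] / max(counts[k], 1) for k in active}
        best = min(means, key=means.get)
        # Drop experts whose lower confidence bound on the loss exceeds
        # the empirically best expert's upper confidence bound.
        active = [k for k in active
                  if means[k] - radius(k) <= means[best] + radius(best)]
    return active  # surviving expert(s) used for prediction at test time
```

The number of rounds each suboptimal expert survives depends on its loss gap to the best expert, which is the flavor of the instance-dependent bounds discussed in the abstract.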


Related research

- 10/05/2022: Constant regret for sequence prediction with limited advice. We investigate the problem of cumulative regret minimization for individ...
- 04/12/2013: Advice-Efficient Prediction with Expert Advice. Advice-efficient prediction with expert advice (in analogy to label-effi...
- 06/05/2023: Active Ranking of Experts Based on their Performances in Many Tasks. We consider the problem of ranking n experts based on their performances...
- 07/03/2023: Trading-Off Payments and Accuracy in Online Classification with Paid Stochastic Experts. We investigate online classification with paid stochastic experts. Here,...
- 02/20/2018: Constant Regret, Generalized Mixability, and Mirror Descent. We consider the setting of prediction with expert advice; a learner make...
- 04/09/2018: Contextual Search via Intrinsic Volumes. We study the problem of contextual search, a multidimensional generaliza...
- 02/01/2021: Impossible Tuning Made Possible: A New Expert Algorithm and Its Applications. We resolve the long-standing "impossible tuning" issue for the classic e...
