Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension

02/27/2023
by   Yuval Filmus, et al.

A classical result in online learning characterizes the optimal mistake bound achievable by deterministic learners using the Littlestone dimension (Littlestone '88). We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class ℋ equals its randomized Littlestone dimension, which is the largest d for which there exists a tree shattered by ℋ whose average depth is 2d.

We further study optimal mistake bounds in the agnostic case, as a function of the number of mistakes made by the best function in ℋ, denoted by k. We show that the optimal randomized mistake bound for learning a class with Littlestone dimension d is k + Θ(√(kd) + d). This also implies an optimal deterministic mistake bound of 2k + O(√(kd) + d), thus resolving an open question studied by Auer and Long ['99].

As an application of our theory, we revisit the classical problem of prediction using expert advice: about 30 years ago Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire, and Warmuth studied prediction using expert advice, provided that the best among the n experts makes at most k mistakes, and asked what the optimal mistake bounds are. Cesa-Bianchi, Freund, Helmbold, and Warmuth ['93, '96] provided a nearly optimal bound for deterministic learners and left the randomized case as an open problem. We resolve this question by providing an optimal learning rule in the randomized case and showing that its expected mistake bound equals half of the deterministic bound, up to negligible additive terms. This improves upon previous works by Cesa-Bianchi, Freund, Haussler, Helmbold, Schapire, and Warmuth ['93, '97], by Abernethy, Langford, and Warmuth ['06], and by Brânzei and Peres ['19], which handled the regimes k ≪ log n or k ≫ log n.
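To make the expert-advice setting concrete, here is a minimal sketch of the classical Randomized Weighted Majority rule of Littlestone and Warmuth, a standard baseline for this problem (this is illustrative background, not the optimal rule constructed in the paper). It maintains one weight per expert, predicts with an expert drawn proportionally to the weights, and multiplies the weight of each wrong expert by a factor β; its expected mistake count is at most (k ln(1/β) + ln n)/(1 − β) when the best expert errs k times. The function name and input format below are our own choices for the sketch.

```python
import math

def randomized_weighted_majority(expert_preds, outcomes, beta=0.5):
    """Randomized Weighted Majority (Littlestone-Warmuth) sketch.

    expert_preds: list of rounds, each a list of n binary predictions.
    outcomes:     the true binary label for each round.
    Returns the learner's expected number of mistakes, i.e. the sum
    over rounds of the probability mass placed on wrong experts.
    """
    n = len(expert_preds[0])
    w = [1.0] * n                       # one weight per expert
    expected_mistakes = 0.0
    for preds, y in zip(expert_preds, outcomes):
        total = sum(w)
        wrong = [i for i in range(n) if preds[i] != y]
        # the learner errs exactly with the weight fraction on wrong experts
        expected_mistakes += sum(w[i] for i in wrong) / total
        for i in wrong:                 # penalize experts that erred
            w[i] *= beta
    return expected_mistakes

# Tiny example: 4 experts, expert 0 is always correct (k = 0).
preds = [[0, 1, 0, 1], [1, 1, 0, 0], [0, 1, 0, 1]]
outcomes = [0, 1, 0]
em = randomized_weighted_majority(preds, outcomes, beta=0.5)
# With k = 0 the bound above is ln(4) / (1 - 0.5) ≈ 2.77.
assert em <= math.log(4) / 0.5
```

The factor-of-two gap between this randomized guarantee and the deterministic (majority-vote) variant is precisely the phenomenon the paper pins down: the optimal randomized bound equals half the deterministic one up to lower-order terms.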

