Online Improper Learning with an Approximation Oracle

04/20/2018
by Elad Hazan et al.

We revisit the question of reducing online learning to approximate optimization of the offline problem. In the full-information setting, we give two algorithms with near-optimal performance: they guarantee optimal regret and require only poly-logarithmically many calls to the approximation oracle per iteration. Furthermore, these algorithms apply to the more general improper learning problem. In the bandit setting, our algorithm significantly improves the best previously known oracle complexity while maintaining the same regret.
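To make the reduction concrete, here is a minimal sketch of the classical online-to-offline pattern the abstract builds on: Follow-the-Perturbed-Leader, which makes one call to an offline optimization oracle per round. This is background illustration, not the paper's algorithm; the paper's contribution is achieving optimal regret with only poly-logarithmically many calls to an *approximation* oracle. The decision set, the exact oracle, and all parameter choices below are hypothetical stand-ins.

```python
import random

def make_exact_oracle(decision_set):
    """Toy stand-in for the offline oracle: returns the point of a
    finite decision set minimizing a linear loss. In the paper's
    setting this would be an approximation oracle instead."""
    def oracle(loss_vector):
        return min(decision_set,
                   key=lambda x: sum(l * xi for l, xi in zip(loss_vector, x)))
    return oracle

def ftpl(oracle, losses, eta=1.0, seed=0):
    """Follow-the-Perturbed-Leader (full-information setting).

    Each round: perturb the cumulative loss vector with i.i.d.
    exponential noise of mean eta, call the offline oracle once,
    play its answer, then observe the round's loss vector.
    Returns the total loss incurred by the learner.
    """
    rng = random.Random(seed)
    d = len(losses[0])
    cumulative = [0.0] * d
    total_loss = 0.0
    for loss in losses:
        # One oracle call per round on the perturbed cumulative loss.
        perturbation = [rng.expovariate(1.0 / eta) for _ in range(d)]
        play = oracle([c - p for c, p in zip(cumulative, perturbation)])
        total_loss += sum(l * xi for l, xi in zip(loss, play))
        cumulative = [c + l for c, l in zip(cumulative, loss)]
    return total_loss
```

For example, with the two-point decision set `{(1,0), (0,1)}` and losses that always charge the first coordinate, the learner locks onto `(0,1)` after the first round and incurs near-zero loss thereafter.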

