Parameter-free Mirror Descent

02/26/2022
by Andrew Jacobsen et al.

We develop a modified online mirror descent framework that is suitable for building adaptive and parameter-free algorithms in unbounded domains. We leverage this technique to develop the first unconstrained online linear optimization algorithm achieving an optimal dynamic regret bound, and we further demonstrate that natural strategies based on Follow-the-Regularized-Leader are unable to achieve similar results. We also apply our mirror descent framework to build new parameter-free implicit updates, as well as a simplified and improved unconstrained scale-free algorithm.
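To make "parameter-free" concrete: a parameter-free algorithm for unconstrained online linear optimization needs no tuned learning rate and no bound on the comparator. The sketch below is not the paper's mirror-descent method; it is the classic Krichevsky-Trofimov (KT) coin-betting bettor, a standard parameter-free baseline for the 1-D unconstrained setting, written here as an assumed illustration of the problem the abstract refers to.

```python
def kt_bettor(gradients, initial_wealth=1.0):
    """KT coin-betting for 1-D unconstrained online linear optimization.

    Each round the learner plays x_t, observes a gradient g_t in [-1, 1],
    and suffers linear loss g_t * x_t. No learning rate or domain bound
    is tuned: the bet is a data-dependent fraction of accumulated wealth,
    and the regret against any comparator u grows roughly like
    |u| * sqrt(T * log(|u| * T)).
    """
    wealth = initial_wealth
    grad_sum = 0.0  # running sum of -g_s, the "coin outcomes" so far
    plays = []
    for t, g in enumerate(gradients, start=1):
        beta = grad_sum / t   # signed betting fraction, always in [-1, 1]
        x = beta * wealth     # bet a fraction of current wealth
        plays.append(x)
        wealth -= g * x       # wealth update: loss g*x is subtracted
        grad_sum += -g
    return plays

# Example: gradients are always -1, so any large positive comparator is
# good. The bettor's plays grow without a preset scale, since wealth
# compounds round over round.
plays = kt_bettor([-1.0] * 20)
assert all(plays[i] <= plays[i + 1] for i in range(len(plays) - 1))
```

The paper's contribution can be read against this baseline: coin-betting-style constructions handle the unconstrained setting, while this work shows how to obtain comparable parameter-free guarantees (and optimal dynamic regret) from a modified mirror-descent framework instead.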



Related research

Parameter-free Online Convex Optimization with Sub-Exponential Noise (02/05/2019)
We consider the problem of unconstrained online convex optimization (OCO...

Black-Box Reductions for Parameter-free Online Learning in Banach Spaces (02/17/2018)
We introduce several new black-box reductions that significantly improve...

Unconstrained Online Learning with Unbounded Losses (06/08/2023)
Algorithms for online learning typically require one or more boundedness...

Isotuning With Applications To Scale-Free Online Learning (12/29/2021)
We extend and combine several tools of the literature to design fast, ad...

Parameter-free Regret in High Probability with Heavy Tails (10/25/2022)
We present new algorithms for online convex optimization over unbounded ...

Parameter-free Online Linear Optimization with Side Information via Universal Coin Betting (02/04/2022)
A class of parameter-free online linear optimization algorithms is propo...

Unconstrained Dynamic Regret via Sparse Coding (01/31/2023)
Motivated by time series forecasting, we study Online Linear Optimizatio...
