Online Convex Optimization with Unconstrained Domains and Losses

03/07/2017
by Ashok Cutkosky, et al.

We propose an online convex optimization algorithm (RescaledExp) that achieves optimal regret in the unconstrained setting without prior knowledge of any bounds on the loss functions. We prove a lower bound showing an exponential separation between the regret of existing algorithms, which require a known bound on the loss functions, and the regret achievable by any algorithm that does not require such knowledge. RescaledExp matches this lower bound asymptotically in the number of iterations. RescaledExp is naturally hyperparameter-free, and we demonstrate empirically that it matches the performance of prior optimization algorithms that require hyperparameter tuning.
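To make the setting concrete, here is a minimal sketch of the unconstrained online convex optimization protocol with linear losses. The learner below is plain fixed-step gradient descent, used only as an illustrative stand-in (it is not RescaledExp, and the names `run_protocol` and `GradientDescent` are our own); its step size `eta` is exactly the kind of hyperparameter that a parameter-free method like RescaledExp avoids.

```python
import random

def run_protocol(grads, learner):
    """Play T rounds of unconstrained OCO with linear losses l_t(w) = g_t * w."""
    total_loss = 0.0
    plays = []
    for g in grads:
        w = learner.predict()   # learner commits to a point first...
        plays.append(w)
        total_loss += g * w     # ...then the loss gradient g_t is revealed
        learner.update(g)
    return total_loss, plays

class GradientDescent:
    """Fixed-step gradient descent over the whole real line.

    A stand-in baseline, not the paper's RescaledExp: the step size
    eta is the hyperparameter a parameter-free algorithm removes.
    """
    def __init__(self, eta):
        self.eta = eta
        self.w = 0.0

    def predict(self):
        return self.w

    def update(self, g):
        self.w -= self.eta * g

random.seed(0)
# Bounded linear losses; no bound is assumed known by a parameter-free method.
grads = [0.9 * random.choice([-1.0, 1.0]) for _ in range(1000)]
total_loss, plays = run_protocol(grads, GradientDescent(eta=0.05))

# Regret against the best fixed comparator u with |u| <= 1: for linear
# losses that comparator is u* = -sign(sum of gradients), so its total
# loss is -|sum(grads)|.
comparator_loss = -abs(sum(grads))
regret = total_loss - comparator_loss
```

In the unconstrained setting studied by the paper, the comparator `u` may be any point in the domain, so the regret bound must scale gracefully with the (unknown) magnitude of `u` rather than with the radius of a fixed feasible set.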


