Unconstrained Online Learning with Unbounded Losses

06/08/2023
by Andrew Jacobsen, et al.

Algorithms for online learning typically require one or more boundedness assumptions: that the domain is bounded, that the losses are Lipschitz, or both. In this paper, we develop a new setting for online learning with unbounded domains and non-Lipschitz losses. For this setting we provide an algorithm which guarantees R_T(u) ≤ Õ(G‖u‖√T + L‖u‖²√T) regret on any problem where the subgradients satisfy ‖g_t‖ ≤ G + L‖w_t‖, and show that this bound is unimprovable without further assumptions. We leverage this algorithm to develop new saddle-point optimization algorithms that converge in duality gap in unbounded domains, even in the absence of meaningful curvature. Finally, we provide the first algorithm achieving non-trivial dynamic regret in an unbounded domain for non-Lipschitz losses, as well as a matching lower bound. The regret of our dynamic regret algorithm automatically improves to a novel L^* bound when the losses are smooth.
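To make the setting concrete, here is a minimal Python sketch; everything in it is an illustrative assumption rather than the paper's method. It uses a quadratic loss family ℓ_t(w) = ½‖w − z_t‖², which is non-Lipschitz over an unbounded domain yet satisfies the abstract's condition ‖g_t‖ ≤ G + L‖w_t‖ with L = 1 and G = max_t ‖z_t‖, and plays it against a stand-in online gradient descent learner so that the regret quantity R_T(u) is explicit. The paper's parameter-free algorithm itself is not reproduced, since the abstract does not describe it.

import numpy as np

# Illustrative sketch only: the paper's algorithm is not shown in the
# abstract, so a plain online-gradient-descent learner stands in for it.
# The loss family below is a hypothetical example of a non-Lipschitz loss
# on an unbounded domain whose subgradients satisfy the abstract's
# quadratic-boundedness condition ||g_t|| <= G + L*||w_t||.

rng = np.random.default_rng(0)
T, d = 1000, 5
Z = rng.normal(size=(T, d))           # per-round data z_t
G = np.linalg.norm(Z, axis=1).max()   # G = max_t ||z_t||
L = 1.0                               # since grad l_t(w) = w - z_t

def loss(w, z):
    # l_t(w) = 0.5*||w - z_t||^2: unbounded domain, non-Lipschitz.
    return 0.5 * np.dot(w - z, w - z)

def grad(w, z):
    # ||grad|| = ||w - z_t|| <= ||z_t|| + ||w|| <= G + L*||w||.
    return w - z

w = np.zeros(d)
cum_loss = 0.0
for t, z in enumerate(Z, start=1):
    g = grad(w, z)
    # Check the quadratic-boundedness condition from the abstract.
    assert np.linalg.norm(g) <= G + L * np.linalg.norm(w) + 1e-9
    cum_loss += loss(w, z)
    w -= g / np.sqrt(t)               # stand-in OGD step, not the paper's update

# Static regret R_T(u) = sum_t l_t(w_t) - l_t(u); for this loss family
# the best fixed comparator u is the mean of the z_t.
u = Z.mean(axis=0)
regret = cum_loss - sum(loss(u, z) for z in Z)
print(f"G = {G:.2f}, R_T(u) = {regret:.2f}, sqrt(T) = {T**0.5:.1f}")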


Related research

02/24/2019 · Artificial Constraints and Lipschitz Hints for Unconstrained Online Learning
We provide algorithms that guarantee regret R_T(u) ≤ Õ(G‖u‖³ + G(‖u‖+1)√T) ...
02/27/2017 · Algorithmic Chaining and the Role of Partial Feedback in Online Nonparametric Learning
We investigate contextual online learning with nonparametric (Lipschitz)...

09/07/2020 · Non-exponentially weighted aggregation: regret bounds for unbounded loss functions
We tackle the problem of online optimization with a general, possibly un...

12/29/2021 · Isotuning With Applications To Scale-Free Online Learning
We extend and combine several tools of the literature to design fast, ad...

01/19/2022 · PDE-Based Optimal Strategy for Unconstrained Online Learning
Unconstrained Online Linear Optimization (OLO) is a practical problem se...

10/25/2022 · Parameter-free Regret in High Probability with Heavy Tails
We present new algorithms for online convex optimization over unbounded ...

02/26/2022 · Parameter-free Mirror Descent
We develop a modified online mirror descent framework that is suitable f...
