Parameter-free Online Convex Optimization with Sub-Exponential Noise

02/05/2019
by Kwang-Sung Jun, et al.

We consider the problem of unconstrained online convex optimization (OCO) with sub-exponential noise, a strictly more general problem than standard OCO. In this setting, the learner receives subgradients of the loss functions corrupted by sub-exponential noise and strives to achieve an optimal regret guarantee without knowledge of the competitor norm, i.e., in a parameter-free way. Recently, Cutkosky and Boahen (COLT 2017) proved that, when the observed subgradients can be unbounded, it is impossible to guarantee sublinear regret without suffering an exponential penalty. This paper shows that it is possible to circumvent this lower bound when the observed subgradients are unbounded only through stochastic noise. The presence of unbounded noise in unconstrained OCO is challenging, however: existing algorithms either fail to provide near-optimal regret bounds or come with no guarantee at all. We therefore design a novel parameter-free OCO algorithm for Banach spaces, which we call BANCO, via a reduction to betting on noisy coins, and show that BANCO achieves the optimal regret rate for our problem. Finally, we apply our results to obtain a parameter-free locally private stochastic subgradient descent algorithm, and discuss the connection to the law of the iterated logarithm.
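The abstract describes BANCO as a reduction from unconstrained OCO to betting on noisy coins. As a point of reference for that coin-betting idea only, the sketch below implements a generic per-coordinate Krichevsky-Trofimov (KT) coin-betting learner in the style of Orabona and Pal. It is not the paper's BANCO algorithm: it assumes bounded subgradients rather than sub-exponential noise, and the oracle name, scale bound G, and toy objective are illustrative assumptions.

```python
import numpy as np

def kt_coin_betting(grad_oracle, d, T, init_wealth=1.0, G=1.0):
    """Per-coordinate Krichevsky-Trofimov (KT) coin-betting learner for
    unconstrained OCO. A generic sketch of the coin-betting reduction,
    NOT the paper's BANCO algorithm: it assumes bounded subgradients
    (|g_t[i]| <= G) instead of sub-exponential noise."""
    wealth = np.full(d, init_wealth)   # W_{t-1}: current wealth per coordinate
    coin_sum = np.zeros(d)             # running sum of past coins c_1 + ... + c_{t-1}
    w_avg = np.zeros(d)                # average iterate (online-to-batch output)
    for t in range(1, T + 1):
        beta = coin_sum / t            # KT betting fraction
        w = beta * wealth              # prediction w_t = beta_t * W_{t-1}
        g = np.clip(grad_oracle(w), -G, G) / G   # (sub)gradient scaled to [-1, 1]
        coin = -g                      # coin outcome is the negated gradient
        wealth += coin * w             # wealth update W_t = W_{t-1} + c_t * w_t
        coin_sum += coin
        w_avg += (w - w_avg) / t       # running average of the iterates
    return w_avg

# Tiny usage example on noisy subgradients of f(w) = ||w - w_star||_1 (illustrative only).
rng = np.random.default_rng(0)
w_star = np.array([3.0, -1.5, 0.0])
oracle = lambda w: np.sign(w - w_star) + 0.1 * rng.standard_normal(w.shape)
print(kt_coin_betting(oracle, d=3, T=5000))   # approaches w_star without tuning a step size
```

The wealth-regret duality behind this sketch (larger final wealth implies smaller regret against any comparator) is what lets a coin-betting learner adapt to the unknown competitor norm; the difficulty addressed by the paper is making such a bettor tolerate noisy, unbounded coin outcomes.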


Related research

03/07/2017 · Online Convex Optimization with Unconstrained Domains and Losses
We propose an online convex optimization algorithm (RescaledExp) that ac...

10/25/2022 · Parameter-free Regret in High Probability with Heavy Tails
We present new algorithms for online convex optimization over unbounded ...

02/26/2022 · Parameter-free Mirror Descent
We develop a modified online mirror descent framework that is suitable f...

02/24/2019 · Artificial Constraints and Lipschitz Hints for Unconstrained Online Learning
We provide algorithms that guarantee regret R_T(u) ≤ Õ(Gu^3 + G(u+1)√(T)) ...

02/06/2020 · No-Regret Prediction in Marginally Stable Systems
We consider the problem of online prediction in a marginally stable line...

01/19/2022 · PDE-Based Optimal Strategy for Unconstrained Online Learning
Unconstrained Online Linear Optimization (OLO) is a practical problem se...

02/27/2020 · Lipschitz and Comparator-Norm Adaptivity in Online Learning
We study Online Convex Optimization in the unbounded setting where neith...
