Adaptive Regret of Convex and Smooth Functions

04/26/2019
by Lijun Zhang, Tie-Yan Liu, Zhi-Hua Zhou

We investigate online convex optimization in changing environments, and adopt adaptive regret as the performance measure. The goal is to achieve a small regret over every interval, so that the comparator may change over time. Unlike previous work, which exploits only convexity, this paper further exploits smoothness to improve the adaptive regret. To this end, we develop novel adaptive algorithms for convex and smooth functions, and establish problem-dependent regret bounds over any interval. Our regret bounds are comparable to existing results in the worst case, and become much tighter when the comparator has a small loss.
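For reference, the adaptive regret over an interval can be written as follows (this is the standard definition from the adaptive-regret literature; the page itself does not state it, so take it as context rather than the paper's exact measure):

```latex
\mathrm{Regret}([r, s]) \;=\; \sum_{t=r}^{s} f_t(\mathbf{x}_t) \;-\; \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=r}^{s} f_t(\mathbf{x})
```

An adaptive algorithm aims to keep this quantity small simultaneously for every interval \([r, s] \subseteq [T]\), which is strictly harder than controlling the standard regret over the full horizon \([1, T]\).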
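The abstract does not describe the algorithmic machinery, but adaptive-regret methods in this line of work commonly run expert algorithms over a geometric covering of intervals (the construction of Daniely et al.; this is an assumption here, since the page gives no details). A minimal sketch of which covering intervals begin at a given round:

```python
def intervals_starting_at(t: int, horizon: int) -> list[tuple[int, int]]:
    """Intervals of the geometric covering that begin at round t (1-indexed).

    The covering is the union over k >= 0 of
        I_k = {[i * 2**k, (i + 1) * 2**k - 1] : i = 1, 2, ...},
    so an interval of length 2**k starts at round t exactly when 2**k divides t.
    """
    out = []
    k = 0
    while 2 ** k <= horizon:
        if t % (2 ** k) == 0:
            out.append((t, t + 2 ** k - 1))
        k += 1
    return out
```

Only O(log t) intervals start at any round t, so a meta-algorithm that spawns one expert per starting interval stays efficient while covering every interval of the horizon up to constant-factor overlap.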


