Tracking Slowly Moving Clairvoyant: Optimal Dynamic Regret of Online Learning with True and Noisy Gradient

05/16/2016
by Tianbao Yang, et al.

This work studies the dynamic regret of online convex optimization, which compares the performance of online learning to a clairvoyant who knows the sequence of loss functions in advance and hence selects the minimizer of each loss function at every step. Assuming that the clairvoyant moves slowly (i.e., the minimizers change slowly), we present several improved variation-based upper bounds on the dynamic regret under true and noisy gradient feedback, which are optimal in light of the presented lower bounds. The key to our analysis is a regularity metric that measures the temporal changes in the clairvoyant's minimizers, which we refer to as the path variation. First, we present a general lower bound in terms of the path variation, and then show that under full-information or gradient feedback we can achieve an optimal dynamic regret. Second, we present a lower bound under noisy gradient feedback and then show that we can achieve optimal dynamic regret under stochastic gradient feedback and two-point bandit feedback. Moreover, for a sequence of smooth loss functions that admit a small variation in the gradients, our dynamic regret under two-point bandit feedback matches what is achieved with full information.
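For context, the dynamic regret measures the learner's cumulative loss against the per-round minimizers, and the path variation measures how far those minimizers drift over time. A standard formulation of both quantities (written here in generic notation, not quoted from the paper) is

    \[ \text{D-Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(x_t^*), \qquad x_t^* \in \arg\min_{x \in \Omega} f_t(x), \]

    \[ \mathcal{V}_T \;=\; \sum_{t=2}^{T} \lVert x_t^* - x_{t-1}^* \rVert_2, \]

where $f_1, \dots, f_T$ are the loss functions and $\Omega$ is the feasible domain. A small $\mathcal{V}_T$ encodes the "slowly moving clairvoyant" assumption, and the upper and lower bounds discussed above are expressed in terms of $T$ and this path variation.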
