Boosting One-Point Derivative-Free Online Optimization via Residual Feedback

10/14/2020
by Yan Zhang, et al.

Zeroth-order optimization (ZO) typically relies on two-point feedback to estimate the unknown gradient of the objective function. However, two-point feedback cannot be used for online optimization of time-varying objective functions, where only a single query of the function value is possible at each time step. In this work, we propose a new one-point feedback method for online optimization that estimates the gradient of the objective function using the residual between two feedback points at consecutive time instants. Moreover, we develop regret bounds for ZO with residual feedback for both convex and nonconvex online optimization problems. Specifically, for both deterministic and stochastic problems, and for both Lipschitz and smooth objective functions, we show that residual feedback produces gradient estimates with much smaller variance than conventional one-point feedback methods. As a result, our regret bounds are much tighter than existing regret bounds for ZO with conventional one-point feedback, which suggests that ZO with residual feedback can better track the optimizer of online optimization problems. Additionally, our regret bounds rely on weaker assumptions than those used in conventional one-point feedback methods. Numerical experiments show that ZO with residual feedback significantly outperforms existing one-point feedback methods in practice as well.
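To make the estimator concrete, the sketch below implements one-point ZO with residual feedback in NumPy: at each step only one new function value is queried, and the gradient estimate g_t = (d/delta) * (f_t(x_t + delta*u_t) - f_{t-1}(x_{t-1} + delta*u_{t-1})) * u_t reuses the value from the previous step. The smoothing radius delta, the step size eta, and the drifting quadratic objective are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def residual_feedback_zo(f, x0, T, delta=0.05, eta=0.01, seed=None):
    """One-point zeroth-order online optimization with residual feedback.

    f(t, x) returns a single (possibly time-varying) function value;
    exactly one query is made per time step after the first.
    """
    rng = np.random.default_rng(seed)
    d = x0.size
    x = x0.astype(float)

    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                  # direction on the unit sphere
    y_prev = f(0, x + delta * u)            # first query, stored for reuse

    for t in range(1, T):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        y = f(t, x + delta * u)             # the single query at time t
        g = (d / delta) * (y - y_prev) * u  # residual-feedback gradient estimate
        x -= eta * g                        # plain ZO gradient-descent update
        y_prev = y                          # reuse this value at time t + 1
    return x

# Toy time-varying objective: a quadratic whose minimizer drifts slowly.
if __name__ == "__main__":
    target = lambda t: np.array([np.cos(1e-3 * t), np.sin(1e-3 * t)])
    f = lambda t, x: float(np.sum((x - target(t)) ** 2))
    x_T = residual_feedback_zo(f, np.zeros(2), T=20000, seed=0)
    print("final iterate:", x_T, "current optimizer:", target(19999))
```

Because the estimate differences two consecutive queries, its magnitude shrinks when the iterates and the objective change slowly, which is the mechanism behind the variance reduction claimed above.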


Related research

06/18/2020
Improving the Convergence Rate of One-Point Zeroth-Order Optimization using Residual Feedback
Many existing zeroth-order optimization (ZO) algorithms adopt two-point ...

07/23/2020
Online Boosting with Bandit Feedback
We consider the problem of online boosting for regression tasks, when on...

12/03/2019
Online and Bandit Algorithms for Nonstationary Stochastic Saddle-Point Optimization
Saddle-point optimization problems are an important class of optimizatio...

05/16/2016
Tracking Slowly Moving Clairvoyant: Optimal Dynamic Regret of Online Learning with True and Noisy Gradient
This work focuses on dynamic regret of online convex optimization that c...

08/07/2023
Non-Convex Bilevel Optimization with Time-Varying Objective Functions
Bilevel optimization has become a powerful tool in a wide variety of mac...

03/16/2022
Risk-Averse No-Regret Learning in Online Convex Games
We consider an online stochastic game with risk-averse agents whose goal...

07/14/2021
A Granular Sieving Algorithm for Deterministic Global Optimization
A gradient-free deterministic method is developed to solve global optimi...
