Lazy Queries Can Reduce Variance in Zeroth-order Optimization

06/14/2022
by Quan Xiao, et al.

A major challenge in applying zeroth-order (ZO) methods is their high query complexity, especially when queries are costly. We propose a novel gradient estimation technique for ZO methods based on adaptive lazy queries that we term LAZO. In contrast to classic one-point and two-point gradient estimation methods, LAZO develops two alternative ways to check the usefulness of old queries from previous iterations, and then adaptively reuses them to construct low-variance gradient estimates. We rigorously establish that, by judiciously reusing old queries, LAZO can reduce the variance of stochastic gradient estimates, so that it not only saves queries per iteration but also achieves the regret bound of the symmetric two-point method. We evaluate the numerical performance of LAZO, and demonstrate its low-variance property and its gains in both regret and query complexity relative to several existing ZO methods. The idea of LAZO is general and can be applied to other variants of ZO methods.
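To make the reuse idea concrete, below is a minimal NumPy sketch contrasting the classic symmetric two-point estimator with an adaptive lazy-reuse variant. The reuse test here (an iterate-distance threshold `tol`) is a hypothetical stand-in for LAZO's two usefulness checks, which the abstract does not spell out; the reuse branch follows the general residual-style recipe of differencing against the previous iteration's stored query, and all names (`two_point_grad`, `LazyZOGradient`) are illustrative, not the paper's code.

```python
import numpy as np

def two_point_grad(f, x, mu=1e-2):
    # Classic symmetric two-point ZO estimator: 2 fresh queries per call.
    d = x.size
    u = np.random.randn(d)
    u /= np.linalg.norm(u)
    return d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

class LazyZOGradient:
    # Illustrative lazy-reuse estimator. The distance test `tol` is a
    # hypothetical proxy for LAZO's usefulness checks: when the iterate
    # has barely moved, the stored query is reused (1 new query); when it
    # is stale, we fall back to the two-point estimator (2 new queries).
    def __init__(self, mu=1e-2, tol=1e-3):
        self.mu, self.tol = mu, tol
        self.prev_x, self.prev_val = None, None

    def estimate(self, f, x):
        d = x.size
        u = np.random.randn(d)
        u /= np.linalg.norm(u)
        f_plus = f(x + self.mu * u)
        if self.prev_val is not None and np.linalg.norm(x - self.prev_x) < self.tol:
            # Old query deemed useful: residual-style difference, 1 new query.
            g = d * (f_plus - self.prev_val) / self.mu * u
        else:
            # Old query stale: symmetric two-point fallback, 2 new queries.
            g = d * (f_plus - f(x - self.mu * u)) / (2 * self.mu) * u
        self.prev_x, self.prev_val = x.copy(), f_plus
        return g

# Toy usage: minimize a quadratic using only function-value queries.
f = lambda z: float(np.sum(z ** 2))
est = LazyZOGradient()
x = np.ones(10)
for _ in range(200):
    x = x - 0.05 * est.estimate(f, x)
```

In this sketch, iterations that pass the reuse test cost one query instead of two, which is the query saving the abstract refers to; the actual variance-reduction guarantee rests on LAZO's adaptive checks rather than this simple distance heuristic.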
