Gradientless Descent: High-Dimensional Zeroth-Order Optimization

11/14/2019
by Daniel Golovin et al.

Zeroth-order optimization is the process of minimizing an objective f(x), given oracle access to evaluations at adaptively chosen inputs x. In this paper, we present two simple yet powerful GradientLess Descent (GLD) algorithms that do not rely on an underlying gradient estimate and are numerically stable. We analyze our algorithms from a novel geometric perspective, showing convergence within an ϵ-ball of the optimum in O(kQ log(n) log(R/ϵ)) evaluations, for any monotone transform of a smooth and strongly convex objective with latent dimension k < n, where n is the input dimension, R is the diameter of the input space, and Q is the condition number. Our rates are the first of their kind to be both 1) poly-logarithmically dependent on dimensionality and 2) invariant under monotone transformations. We further leverage our geometric perspective to show that our analysis is optimal. Both monotone invariance and the ability to exploit a low latent dimensionality are key to the empirical success of our algorithms, as demonstrated on BBOB and MuJoCo benchmarks.
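Since the abstract describes the method only at a high level, the following is a minimal sketch of the ball-sampling idea behind GLD: sample candidates from balls of geometrically spaced radii around the current iterate and move only on improvement, so no gradient estimate is ever formed and the method is invariant under monotone transforms of f. Function and parameter names (gld_search, num_iters, the choice of ball vs. sphere sampling) are illustrative assumptions, not the authors' exact pseudocode.

```python
import numpy as np

def gld_search(f, x0, R, eps, num_iters=1000, rng=None):
    """Sketch of a GradientLess Descent (GLD) style search.

    Each step samples one candidate from every ball around the current
    iterate whose radius lies on a geometric grid between eps and R
    (a binary search over step sizes), then keeps the best candidate
    only if it improves f. Only comparisons of function values are
    used, so any monotone transform of f gives identical iterates.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    K = max(1, int(np.ceil(np.log2(R / eps))))  # radii on the geometric grid
    for _ in range(num_iters):
        best_y, best_fy = x, fx
        for k in range(K):
            r = R * 2.0 ** (-k)                  # radius R, R/2, R/4, ...
            u = rng.standard_normal(n)
            u *= rng.random() ** (1.0 / n) / np.linalg.norm(u)  # uniform in unit ball
            y = x + r * u
            fy = f(y)
            if fy < best_fy:
                best_y, best_fy = y, fy
        x, fx = best_y, best_fy                  # move only on improvement
    return x, fx

# Usage on a smooth, strongly convex test objective (illustrative only):
f = lambda x: np.sum((x - 1.0) ** 2)
x_star, f_star = gld_search(f, x0=np.zeros(10), R=10.0, eps=1e-3)
```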

Related research

- Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization (11/30/2021)
- Self-adjusting Population Sizes for the (1, λ)-EA on Monotone Functions (04/01/2022)
- Optimal Methods for Higher-Order Smooth Monotone Variational Inequalities (05/12/2022)
- Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling (03/29/2020)
- An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization (07/10/2023)
- Resource-Efficient Invariant Networks: Exponential Gains by Unrolled Optimization (03/09/2022)
- Improved rates for derivative free play in convex games (11/18/2021)
