An adaptive stochastic gradient-free approach for high-dimensional blackbox optimization

06/18/2020
by Anton Dereventsov, et al.

In this work, we propose a novel adaptive stochastic gradient-free (ASGF) approach for solving high-dimensional nonconvex optimization problems using only function evaluations. We employ a directional Gaussian smoothing of the target function that generates a surrogate of the gradient and helps avoid poor local optima by exploiting nonlocal information about the loss landscape. Applying a deterministic quadrature scheme results in a massively scalable technique that is sample-efficient and achieves spectral accuracy. At each step we generate the search directions randomly while primarily following the surrogate of the smoothed gradient, which enables exploitation of the gradient direction while maintaining sufficient exploration of the search space, thereby accelerating convergence toward the global optimum. In addition, we use a local approximation of the Lipschitz constant to adaptively adjust the values of all hyperparameters, removing the careful fine-tuning that current algorithms often require to succeed on a large class of learning tasks. As a result, the ASGF strategy offers significant improvements in solving high-dimensional nonconvex optimization problems compared to other gradient-free methods (including the so-called "evolutionary strategies") as well as iterative approaches that rely on gradient information of the objective function. We illustrate the improved performance of this method with several comparative numerical studies on benchmark global optimization problems and reinforcement learning tasks.
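To make the two main ingredients concrete, the sketch below estimates a directional-Gaussian-smoothing gradient surrogate with Gauss-Hermite quadrature and uses it inside a simple descent loop with a Lipschitz-based step size. This is a minimal illustration under stated assumptions, not the authors' exact algorithm: the function names (dgs_gradient, random_frame, asgf_minimize), the step-size rule step = sigma / L, and the sigma-halving schedule on failed steps are illustrative simplifications of the adaptive mechanism described in the abstract.

```python
import numpy as np

def dgs_gradient(f, x, directions, sigma, quad_points=5):
    """Directional Gaussian smoothing (DGS) gradient surrogate.

    For each column xi of `directions`, the derivative of the
    Gaussian-smoothed objective along xi is estimated with a
    deterministic Gauss-Hermite quadrature rule; the 1-D estimates
    are then assembled into a full gradient surrogate.
    """
    nodes, weights = np.polynomial.hermite.hermgauss(quad_points)
    g = np.empty(directions.shape[1])
    for i, xi in enumerate(directions.T):
        # Objective sampled along the line x + sqrt(2)*sigma*nodes*xi.
        vals = np.array([f(x + np.sqrt(2.0) * sigma * p * xi) for p in nodes])
        # d/dy E_{v~N(0,1)}[f(x + (y + sigma*v)*xi)] evaluated at y = 0.
        g[i] = np.sum(weights * np.sqrt(2.0) * nodes * vals) / (np.sqrt(np.pi) * sigma)
    return directions @ g

def random_frame(dim, lead=None, rng=None):
    """Random orthonormal basis whose first column follows `lead`
    (the previous gradient surrogate); the other columns explore."""
    if rng is None:
        rng = np.random.default_rng()
    A = rng.standard_normal((dim, dim))
    if lead is not None and np.linalg.norm(lead) > 0:
        A[:, 0] = lead
    Q, _ = np.linalg.qr(A)
    if lead is not None and Q[:, 0] @ lead < 0:
        Q[:, 0] = -Q[:, 0]  # keep the leading column aligned with `lead`
    return Q

def asgf_minimize(f, x0, sigma=2.0, iters=200, quad_points=5, seed=0):
    """Minimal ASGF-style loop: DGS descent with a step size scaled by
    a crude local Lipschitz estimate and a shrinking smoothing radius."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    g = None
    for _ in range(iters):
        Q = random_frame(x.size, lead=g, rng=rng)
        g = dgs_gradient(f, x, Q, sigma, quad_points)
        lip = np.max(np.abs(g)) + 1e-12   # local Lipschitz estimate (illustrative)
        x_new = x - (sigma / lip) * g     # step size ~ sigma / L (illustrative)
        if f(x_new) < f(x):
            x = x_new                     # accept the step
        else:
            sigma *= 0.5                  # otherwise refine the smoothing radius
    return x

# Example: 10-dimensional Ackley function, a standard multimodal benchmark.
def ackley(x):
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x))) + 20.0 + np.e)

x_best = asgf_minimize(ackley, x0=np.full(10, 2.5))
print(ackley(x_best))  # typically far below the starting value ackley(x0) ≈ 10.2
```

Note that each iteration requires quad_points function evaluations per direction, and all of these evaluations are independent of one another, which is what makes the quadrature-based scheme embarrassingly parallel and, as the abstract puts it, massively scalable.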


