
An adaptive stochastic gradient-free approach for high-dimensional blackbox optimization

06/18/2020
by   Anton Dereventsov, et al.

In this work, we propose a novel adaptive stochastic gradient-free (ASGF) approach for solving high-dimensional nonconvex optimization problems based solely on function evaluations. We employ a directional Gaussian smoothing of the target function that generates a surrogate of the gradient and helps avoid bad local optima by utilizing nonlocal information about the loss landscape. Applying a deterministic quadrature scheme results in a massively scalable technique that is sample-efficient and achieves spectral accuracy. At each step we randomly generate the search directions while primarily following the surrogate of the smoothed gradient. This enables exploitation of the gradient direction while maintaining sufficient exploration of the search space, and accelerates convergence towards the global extremum. In addition, we make use of a local approximation of the Lipschitz constant to adaptively adjust the values of all hyperparameters, thus removing the careful fine-tuning that current algorithms often require to succeed on a large class of learning tasks. As such, the ASGF strategy offers significant improvements on high-dimensional nonconvex optimization problems when compared to other gradient-free methods (including the so-called "evolutionary strategies") as well as iterative approaches that rely on the gradient information of the objective function. We illustrate the improved performance of this method through several comparative numerical studies on benchmark global optimization problems and reinforcement learning tasks.
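To make the core mechanism concrete, below is a minimal Python sketch of a directional Gaussian smoothing (DGS) gradient surrogate computed with deterministic Gauss-Hermite quadrature, in the spirit of the approach described above. This is an illustration under simplifying assumptions, not the authors' implementation: the name dgs_gradient and the parameters sigma and num_quad are hypothetical, the search directions are drawn as a fresh random orthonormal frame rather than being biased toward the previous surrogate gradient, and the adaptive Lipschitz-based hyperparameter updates of ASGF are omitted.

```python
import numpy as np

def dgs_gradient(f, x, sigma=0.5, num_quad=5, rng=None):
    """Directional Gaussian smoothing gradient surrogate (illustrative sketch).

    For each direction xi, approximates the derivative of the
    Gaussian-smoothed objective along xi by Gauss-Hermite quadrature:
        d_i ~= sqrt(2/pi)/sigma * sum_m w_m * t_m * f(x + sqrt(2)*sigma*t_m*xi)
    and assembles the surrogate gradient as sum_i d_i * xi.
    """
    rng = np.random.default_rng() if rng is None else rng
    dim = x.size
    # Random orthonormal search directions (the full ASGF method instead
    # biases these toward the previous smoothed-gradient direction).
    directions, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    nodes, weights = np.polynomial.hermite.hermgauss(num_quad)
    grad = np.zeros(dim)
    for i in range(dim):
        xi = directions[:, i]
        # Evaluate f at quadrature points spread a distance O(sigma) along xi.
        vals = np.array([f(x + np.sqrt(2.0) * sigma * t * xi) for t in nodes])
        deriv = np.sqrt(2.0) * (weights * nodes * vals).sum() / (sigma * np.sqrt(np.pi))
        grad += deriv * xi
    return grad

# Usage: plain gradient-surrogate descent on a simple quadratic.
f = lambda x: np.sum(x ** 2)
x = np.ones(10)
for _ in range(50):
    x = x - 0.1 * dgs_gradient(f, x)
print(f(x))  # converges toward the global minimum at 0
```

Because the quadrature is deterministic, the directional derivative of the smoothed objective is recovered up to quadrature accuracy (exactly, for this quadratic), which is the sample-efficiency and spectral-accuracy property referred to above; the nonlocal character comes from evaluating f at points a distance on the order of sigma away from x rather than in an infinitesimal neighborhood.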
