Stochastic Implicit Natural Gradient for Black-box Optimization

10/09/2019
by   Yueming Lyu, et al.

Black-box optimization is of primary importance for many compute-intensive applications, including reinforcement learning (RL) and robot control. This paper presents a novel theoretical framework for black-box optimization in which our method performs stochastic updates within a trust region defined by the KL-divergence. We show that this update is equivalent to a natural gradient step with respect to the natural parameters of an exponential-family distribution. Theoretically, we prove a convergence rate for our framework on convex functions; these results also hold for non-differentiable black-box functions. Empirically, our method achieves superior performance compared with the state-of-the-art CMA-ES on separable benchmark test problems.
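To make the abstract's central idea concrete, the following is a minimal Python sketch of a natural-gradient-style update for a Gaussian search distribution on a black-box objective. It illustrates only the general principle (preconditioning the score-function gradient by the Fisher information of the search distribution), not the paper's specific update rule or its KL trust region; the `sphere` objective, step size, and sampling budget are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    """Toy separable black-box objective (assumed for illustration)."""
    return float(np.sum(x ** 2))

def gaussian_search_step(f, mu, sigma, n_samples=50, lr=0.1, rng=None):
    """One natural-gradient-style update of the mean of an isotropic Gaussian
    search distribution N(mu, sigma^2 I). For the Gaussian mean, the Fisher
    matrix is I / sigma^2, so preconditioning the score-function gradient by
    sigma^2 gives the simple update below. This is a generic sketch, not the
    paper's update."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((n_samples, mu.size))   # standard normal noise
    xs = mu + sigma * eps                             # candidate solutions
    fs = np.array([f(x) for x in xs])                 # black-box evaluations
    # Rank-based fitness shaping (common in evolution strategies) to reduce
    # the variance of the Monte Carlo gradient estimate.
    ranks = fs.argsort().argsort()
    utils = (ranks - ranks.mean()) / (ranks.std() + 1e-12)
    # Natural gradient of E[f(x)] w.r.t. mu: sigma^2 * score-function gradient.
    grad_mu = sigma * (utils @ eps) / n_samples
    return mu - lr * grad_mu

# Minimal usage: minimize the sphere function in 10 dimensions.
mu = np.ones(10)
for _ in range(200):
    mu = gaussian_search_step(sphere, mu, sigma=0.3)
print("final objective:", sphere(mu))
```

Rank-based fitness shaping is used here only to stabilize the gradient estimate; the paper's framework instead derives its update from a KL-divergence trust-region problem over the natural parameters of an exponential-family distribution.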


Related research

01/06/2022

SABLAS: Learning Safe Control for Black-box Dynamical Systems

Control certificates based on barrier functions have been a powerful too...
04/28/2022

Benchmarking the Hooke-Jeeves Method, MTS-LS1, and BSrr on the Large-scale BBOB Function Set

This paper investigates the performance of three black-box optimizers ex...
06/22/2011

Natural Evolution Strategies

This paper presents Natural Evolution Strategies (NES), a recent family ...
05/24/2022

Regret-Aware Black-Box Optimization with Natural Gradients, Trust-Regions and Entropy Control

Most successful stochastic black-box optimizers, such as CMA-ES, use ran...
02/08/2022

Fourier Representations for Black-Box Optimization over Categorical Variables

Optimization of real-world black-box functions defined over purely categ...
08/10/2021

Asymptotic convergence rates for averaging strategies

Parallel black box optimization consists in estimating the optimum of a ...
11/03/2020

AdaDGS: An adaptive black-box optimization method with a nonlocal directional Gaussian smoothing gradient

The local gradient points to the direction of the steepest slope in an i...