Distributed Evolution Strategies for Black-box Stochastic Optimization

04/09/2022
by Xiaoyu He, et al.

This work concerns evolutionary approaches to distributed stochastic black-box optimization, in which each worker individually solves an approximation of the problem with nature-inspired algorithms. We propose a distributed evolution strategy (DES) algorithm grounded in a suitable modification of evolution strategies, a family of classic evolutionary algorithms, combined carefully with existing distributed frameworks. On smooth nonconvex landscapes, DES has a convergence rate competitive with that of existing zeroth-order methods, and it can exploit sparsity, when present, to match the rate of first-order methods. DES uses a Gaussian probability model to guide the search, avoiding the numerical issues that arise from the finite-difference techniques used in existing zeroth-order methods. It is also fully adaptive to the problem landscape, as its convergence is guaranteed under any parameter setting. We further propose two alternative sampling schemes that significantly improve sampling efficiency while yielding similar performance. Simulation studies on several machine learning problems suggest that the proposed methods show much promise in reducing convergence time and improving robustness to parameter settings.
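To illustrate the general idea of guiding a black-box search with a Gaussian probability model rather than finite differences, the sketch below shows a plain evolution-strategy update step. It is a minimal illustration only, not the authors' DES algorithm; the function `es_step`, its parameters, and the worker-averaging usage note are hypothetical assumptions introduced for this example.

```python
# Minimal sketch of a Gaussian-model evolution strategy step (illustrative only;
# NOT the authors' DES method). All names and parameter choices are assumptions.
import numpy as np

def es_step(theta, objective, sigma=0.1, population=16, lr=0.05, rng=None):
    """One evolution-strategy update: sample candidates from a Gaussian centered
    at `theta`, query the black-box `objective`, and move the mean against a
    fitness-weighted direction estimate (no per-point finite differences)."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((population, theta.size))       # search directions
    candidates = theta + sigma * noise                           # Gaussian samples
    scores = np.array([objective(c) for c in candidates])        # black-box queries
    weights = (scores - scores.mean()) / (scores.std() + 1e-8)   # normalized fitness
    # For minimization, step opposite the estimated ascent direction.
    return theta - lr * (weights @ noise) / (population * sigma)

if __name__ == "__main__":
    f = lambda x: float(np.sum(x ** 2))   # toy smooth objective
    theta = np.ones(5)
    for _ in range(200):
        theta = es_step(theta, f)
    print(theta)
```

In a distributed setting of the kind described above, one could imagine each worker running such a step on its own stochastic approximation of the objective and periodically synchronizing the mean parameters; the specific synchronization and sampling schemes of DES are detailed in the paper itself.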


