
Global Linear Convergence of Evolution Strategies on More Than Smooth Strongly Convex Functions

by   Youhei Akimoto, et al.

Evolution strategies (ESs) are zero-order stochastic black-box optimization heuristics that are invariant to monotonic transformations of the objective function. They evolve a multivariate normal distribution from which candidate solutions are sampled. Among the many variants, CMA-ES is nowadays recognized as one of the state-of-the-art zero-order optimizers for difficult problems. Despite ample empirical evidence that ESs with a step-size control mechanism converge linearly, theoretical guarantees of linear convergence of ESs have been established only for limited classes of functions. In particular, theoretical results on convex functions, where zero-order and first-order optimization methods are commonly analyzed, are missing. In this paper, we establish almost sure linear convergence and a bound on the expected hitting time of an ES, namely the (1+1)-ES with the (generalized) one-fifth success rule and an abstract covariance matrix adaptation with bounded condition number, on a broad class of functions. The analysis holds for monotonic transformations of positively homogeneous functions and of quadratically bounded functions; the latter class notably includes monotonic transformations of strongly convex functions with Lipschitz continuous gradient. To the best of the authors' knowledge, this is the first work that proves linear convergence of an ES on such a broad class of functions.
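The algorithm analyzed above can be illustrated with a minimal sketch of a (1+1)-ES with the one-fifth success rule. This is a hypothetical simplification for illustration only: it uses isotropic sampling and omits the covariance matrix adaptation component that the paper's analysis allows for; the function name, step-size update constants, and parameters are the author of this sketch's assumptions, not taken from the paper.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, budget=2000):
    """Minimal (1+1)-ES with the one-fifth success rule (isotropic sampling).

    On success (offspring no worse than parent), accept the offspring and
    enlarge the step size; on failure, shrink it. The 4:1 ratio of the
    increase and decrease exponents keeps the success rate near 1/5
    at equilibrium.
    """
    x = np.asarray(x0, dtype=float)
    sigma = sigma0
    fx = f(x)
    c = 2.0 ** 0.25  # step-size change factor (a common textbook choice)
    for _ in range(budget):
        y = x + sigma * np.random.randn(len(x))  # sample one offspring
        fy = f(y)
        if fy <= fx:            # success: move to the offspring
            x, fx = y, fy
            sigma *= c          # increase step size
        else:                   # failure: stay put
            sigma /= c ** 0.25  # decrease step size (1/4 of the increase rate)
    return x, fx
```

For example, on the sphere function `f(x) = ||x||^2`, a quadratically bounded (indeed strongly convex, smooth) function, the iterates exhibit the linear convergence behavior the paper establishes: the log of the objective value decreases roughly linearly with the iteration count.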

