Global Convergence of the (1+1) Evolution Strategy

06/09/2017
by Tobias Glasmachers et al.

We establish global convergence of the (1+1)-ES algorithm, i.e., convergence to a critical point independent of the initial state. The analysis is based on two ingredients. First, we establish a sufficient decrease condition for elitist, rank-based evolutionary algorithms, formulated for an essentially monotonically transformed variant of the objective function. This tool is of general value and is therefore formulated for general search spaces. Second, to make it applicable to the (1+1)-ES, we show that with full probability the algorithm state is found infinitely often in a regime where step size and success rate are simultaneously bounded away from zero. The main result is proven by combining both statements. Under minimal technical preconditions, the theorem ensures that the sequence of iterates has a limit point that cannot be improved in the limit of vanishing step size, a generalization of the notion of critical points of smooth functions. Importantly, our analysis reflects the actual dynamics of the algorithm and hence supports our understanding of its mechanisms, in particular success-based step size control. We apply the theorem to analyze the optimization behavior of the (1+1)-ES on various problems, ranging from smooth (non-convex) cases through different types of saddle points and ridge functions to discontinuous and extremely rugged problems.
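For readers unfamiliar with the algorithm under analysis, the following is a minimal Python sketch of the (1+1)-ES with elitist selection and success-based step size control (a 1/5-rule style update). The adaptation constants c_succ and c_fail and the Rosenbrock test function are illustrative assumptions, not the specific parameters or test problems studied in the paper.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, max_evals=10_000,
                    c_succ=1.5, c_fail=0.75, rng=None):
    """Minimal sketch of the (1+1)-ES with success-based step size control.
    c_succ and c_fail are illustrative adaptation constants."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    sigma = sigma0
    for _ in range(max_evals):
        y = x + sigma * rng.standard_normal(x.shape)  # sample offspring from an isotropic Gaussian
        fy = f(y)
        if fy <= fx:            # elitist (plus) selection: accept only non-worsening steps
            x, fx = y, fy
            sigma *= c_succ     # success: enlarge the step size
        else:
            sigma *= c_fail     # failure: shrink the step size
    return x, fx, sigma

# Usage: minimize the (non-convex) Rosenbrock function in 2D.
rosenbrock = lambda z: (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
x_best, f_best, _ = one_plus_one_es(rosenbrock, [-1.5, 2.0], sigma0=0.5)
print(x_best, f_best)
```

The success-based update is the mechanism the convergence analysis hinges on: repeated successes drive the step size up, repeated failures drive it down, which keeps the state in a regime where step size and success rate stay coupled.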


