Global Convergence of the (1+1) Evolution Strategy

06/09/2017
by Tobias Glasmachers, et al.

We establish global convergence of the (1+1)-ES algorithm, i.e., convergence to a critical point independent of the initial state. The analysis is based on two ingredients. We establish a sufficient decrease condition for elitist, rank-based evolutionary algorithms, formulated for an essentially monotonically transformed variant of the objective function. This tool is of general value, and it is therefore formulated for general search spaces. To make it applicable to the (1+1)-ES, we show that, with full probability, the algorithm state is found infinitely often in a regime where step size and success rate are simultaneously bounded away from zero. The main result is proven by combining both statements. Under minimal technical preconditions, the theorem ensures that the sequence of iterates has a limit point that cannot be improved in the limit of vanishing step size, a generalization of the notion of critical points of smooth functions. Importantly, our analysis reflects the actual dynamics of the algorithm and hence supports our understanding of its mechanisms, in particular success-based step size control. We apply the theorem to the analysis of the optimization behavior of the (1+1)-ES on various problems ranging from smooth (non-convex) cases, through different types of saddle points and ridge functions, to discontinuous and extremely rugged problems.
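To make the mechanism discussed in the abstract concrete, here is a minimal sketch of a (1+1)-ES with elitist selection and success-based step size control. The adaptation constants (chosen so that the step size is stationary at roughly a 1/5 success rate) and the sphere test function are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, max_evals=10_000,
                    c_succ=1.5, c_fail=1.5 ** (-0.25)):
    """Minimize f with a (1+1)-ES using success-based step size control.

    A single offspring is sampled per iteration from an isotropic Gaussian.
    Improvements are accepted (elitism) and increase the step size; failures
    decrease it. The factors c_succ, c_fail are illustrative choices that
    balance at about a 1/5 success rate.
    """
    x, sigma = np.asarray(x0, dtype=float), sigma0
    fx = f(x)
    for _ in range(max_evals):
        y = x + sigma * np.random.randn(len(x))  # sample offspring
        fy = f(y)
        if fy <= fx:            # success: keep offspring, enlarge step size
            x, fx = y, fy
            sigma *= c_succ
        else:                   # failure: keep parent, shrink step size
            sigma *= c_fail
    return x, fx, sigma

# Usage example on the sphere function (a stand-in smooth test problem).
if __name__ == "__main__":
    x_best, f_best, sigma_final = one_plus_one_es(lambda x: np.sum(x ** 2),
                                                  x0=np.ones(10))
    print(x_best, f_best, sigma_final)
```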
