Convergence of the Continuous Time Trajectories of Isotropic Evolution Strategies on Monotonic C^2-composite Functions

06/21/2012
by Youhei Akimoto, et al.

Information-Geometric Optimization (IGO) has been introduced as a unified framework for stochastic search algorithms. Given a parametrized family of probability distributions on the search space, IGO turns an arbitrary optimization problem on the search space into an optimization problem on the parameter space of the distribution family and defines a natural gradient ascent on that space. The natural gradients defined over the entire parameter space yield continuous-time trajectories, namely the solutions of an ordinary differential equation (ODE). Via discretization, IGO naturally defines an iterated gradient ascent algorithm. Depending on the chosen distribution family, IGO recovers several known algorithms, such as the pure rank-μ update CMA-ES; consequently, the continuous-time IGO trajectory can be viewed as an idealization of the original algorithm. In this paper we study the continuous-time trajectories of IGO for the family of isotropic Gaussian distributions. These trajectories are a deterministic continuous-time model of the underlying evolution strategy in the limit of the population size tending to infinity and the change rates tending to zero. On functions that are the composite of a monotone and a convex-quadratic function, we prove global convergence of the solution of the ODE towards the global optimum. We extend this result to composites of monotone and twice continuously differentiable functions and prove local convergence towards local optima.
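The discretized algorithm the abstract alludes to can be illustrated with a minimal sketch of an isotropic-Gaussian natural-gradient step: sample from N(m, σ²I), rank the samples by the objective, and update the mean and step-size from rank-based weights. This is not the paper's exact construction; the truncation weights, the learning rate `eta`, and the step-size update below are simplifying assumptions chosen for brevity (the rank-based weighting is one admissible choice within the IGO framework).

```python
import numpy as np

def f(x):
    # Monotone composite of a convex quadratic: g(q(x)) with g = sqrt (monotone).
    return np.sqrt(np.sum(x ** 2))

def igo_isotropic_step(m, sigma, f, lam=100, eta=0.1, rng=None):
    """One Euler step of a natural gradient ascent for N(m, sigma^2 I).

    Assumptions (not from the paper): truncation weights over the best
    half of the population, a shared learning rate eta for mean and
    step-size, and an exponential (log-coordinate) step-size update.
    """
    rng = rng or np.random.default_rng()
    n = len(m)
    z = rng.standard_normal((lam, n))          # standard normal samples
    x = m + sigma * z                          # candidate solutions
    order = np.argsort([f(xi) for xi in x])    # rank by objective value
    mu = lam // 2
    w = np.zeros(lam)
    w[:mu] = 1.0 / mu                          # truncation weights, sum to 1
    sel = z[order]                             # z-samples sorted by rank
    # Natural gradient update of the mean and of log(sigma):
    m_new = m + eta * sigma * np.sum(w[:, None] * sel, axis=0)
    sigma_new = sigma * np.exp(
        eta * np.sum(w * (np.sum(sel ** 2, axis=1) - n)) / (2 * n)
    )
    return m_new, sigma_new
```

Iterating this step on the composite function above drives the mean towards the global optimum while the step-size adapts downward near it, mirroring the behavior of the continuous-time trajectory that the paper analyzes in the infinite-population, vanishing-change-rate limit.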


