
Convergence of the Continuous Time Trajectories of Isotropic Evolution Strategies on Monotonic C^2-composite Functions

06/21/2012
by   Youhei Akimoto, et al.
Laboratoire de Recherche en Informatique (LRI)
Information-Geometric Optimization (IGO) has been introduced as a unified framework for stochastic search algorithms. Given a parametrized family of probability distributions on the search space, IGO turns an arbitrary optimization problem on the search space into an optimization problem on the parameter space of the distribution family and defines a natural gradient ascent on this space. The natural gradients defined over the entire parameter space yield continuous time trajectories, namely the solutions of an ordinary differential equation (ODE). Via discretization, IGO naturally defines an iterated gradient ascent algorithm. Depending on the chosen distribution family, IGO recovers several known algorithms such as the pure rank-μ update CMA-ES. Consequently, the continuous time IGO trajectory can be viewed as an idealization of the original algorithm. In this paper we study the continuous time trajectories of IGO for the family of isotropic Gaussian distributions. These trajectories form a deterministic continuous time model of the underlying evolution strategy in the limit of infinite population size and vanishing change rates. On functions that are the composite of a monotone function with a convex-quadratic function, we prove global convergence of the solution of the ODE towards the global optimum. We extend this result to composites of monotone and twice continuously differentiable functions and prove local convergence towards local optima.
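To make the discretized scheme concrete, here is a minimal sketch of an isotropic-Gaussian evolution strategy of the kind the abstract describes: sample from N(m, σ²I), rank the samples, and update mean and step size with rank-based weights. The weighting scheme, learning rate, and step-size rule below are illustrative choices (an xNES-style update), not the paper's exact IGO derivation.

```python
import math
import random

def isotropic_es(f, mean, sigma, lam=20, iters=300, eta_sigma=0.3, seed=1):
    """Toy isotropic (mu/mu_w, lambda) evolution strategy: a discretized
    natural-gradient-style ascent on the parameters (mean, sigma) of an
    isotropic Gaussian.  Weights and learning rates are illustrative."""
    rng = random.Random(seed)
    n = len(mean)
    mu = lam // 2
    # log-linear rank weights for the best mu offspring, normalized to sum to 1
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    total = sum(w)
    w = [wi / total for wi in w]
    for _ in range(iters):
        # sample lam offspring x = mean + sigma * z with z ~ N(0, I)
        pop = []
        for _ in range(lam):
            z = [rng.gauss(0.0, 1.0) for _ in range(n)]
            x = [mean[j] + sigma * z[j] for j in range(n)]
            pop.append((f(x), x, z))
        pop.sort(key=lambda t: t[0])  # best (smallest f) first
        # mean update: weighted recombination of the selected offspring
        mean = [sum(w[i] * pop[i][1][j] for i in range(mu)) for j in range(n)]
        # step-size update from the squared lengths of the selected z-vectors
        z2 = sum(w[i] * sum(zj * zj for zj in pop[i][2]) for i in range(mu))
        sigma *= math.exp(0.5 * eta_sigma * (z2 - n) / n)
    return mean, sigma

# usage: minimize the sphere function, a monotone-convex-quadratic composite
sphere = lambda x: sum(xi * xi for xi in x)
m, s = isotropic_es(sphere, [3.0, -2.0], 1.0)
```

Note that ranking makes the update invariant under monotone transformations of f, which is why the paper's results are stated for composites of a monotone function with a smooth one.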
