Computational cost for determining an approximate global minimum using the selection and crossover algorithm

05/24/2019
by Takuya Isomura, et al.

This work examines the expected computational cost of determining an approximate global minimum for a class of cost functions characterized by the variance of their coefficients. The cost function takes N-dimensional binary states as arguments and has many local minima. Random search requires a number of iterations on the order of 2^N to determine an approximate global minimum. This work demonstrates, both analytically and numerically, that the selection and crossover algorithm with random initialization can reduce the required computational cost (i.e., the number of iterations) for identifying an approximate global minimum to the order of λ^N with λ less than 2. The two best solutions, referred to as parents, are selected from a pool of randomly sampled states. Offspring generated by crossovers of the parents' states are distributed with a mean cost lower than that of the original distribution from which the parents were drawn. In contrast to the mean, however, the variance of the offspring's cost is asymptotically the same as that of the original distribution. Consequently, sampling from the offspring's distribution yields a higher chance of finding an approximate global minimum than sampling from the original distribution, thereby accelerating the global search. This property is distinct from that of the distribution obtained by mixing a large population of favorable states, which yields offspring with a lower variance. These findings demonstrate the advantage of crossover between two favorable states over a mixture of many favorable states for efficiently determining an approximate global minimum.
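The sketch below illustrates the selection-and-crossover scheme described in the abstract. It is not the paper's exact procedure: the quadratic cost with i.i.d. Gaussian coefficients, the pool size, and the number of generations are assumptions chosen only to make the example self-contained and runnable.

```python
# Minimal sketch of a selection-and-crossover search over N-dimensional binary states.
# The quadratic cost with i.i.d. Gaussian coefficients is an assumed stand-in for the
# paper's cost class; pool size and generation count are illustrative, not prescribed.
import numpy as np

rng = np.random.default_rng(0)

N = 20                                  # dimension of the binary states
W = rng.standard_normal((N, N))         # random coefficients (assumption)
W = (W + W.T) / 2                       # symmetrize for a well-defined quadratic form

def cost(s):
    """Quadratic cost over a binary state s in {0, 1}^N (illustrative)."""
    return s @ W @ s

def crossover(p1, p2, rng):
    """Uniform crossover: each bit is inherited from either parent with probability 1/2."""
    mask = rng.random(size=p1.shape) < 0.5
    return np.where(mask, p1, p2)

def selection_crossover_search(pool_size=50, generations=100):
    # Start from a pool of randomly sampled binary states.
    pool = rng.integers(0, 2, size=(pool_size, N))
    best_state, best_cost = None, np.inf
    for _ in range(generations):
        costs = np.array([cost(s) for s in pool])
        order = np.argsort(costs)
        if costs[order[0]] < best_cost:
            best_cost, best_state = costs[order[0]], pool[order[0]].copy()
        # Select the two best states ("parents") and refill the pool with
        # offspring generated by crossovers of the parents' states.
        p1, p2 = pool[order[0]], pool[order[1]]
        pool = np.array([crossover(p1, p2, rng) for _ in range(pool_size)])
    return best_state, best_cost

state, value = selection_crossover_search()
print("approximate global minimum cost:", value)
```

Repeatedly reselecting parents from the offspring pool shifts the sampling distribution toward lower costs, which is the mechanism the abstract credits for reducing the required number of iterations from the order of 2^N to the order of λ^N.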
