
Computational cost for determining an approximate global minimum using the selection and crossover algorithm

by   Takuya Isomura, et al.

This work examines the expected computational cost of determining an approximate global minimum for a class of cost functions characterized by the variance of their coefficients. The cost function takes N-dimensional binary states as arguments and has many local minima. Iterations on the order of 2^N are required to determine an approximate global minimum using random search. This work demonstrates, analytically and numerically, that the selection and crossover algorithm with random initialization can reduce the required computational cost (i.e., the number of iterations) for identifying an approximate global minimum to the order of λ^N with λ less than 2. The two best solutions, referred to as parents, are selected from a pool of randomly sampled states. Offspring generated by crossovers of the parents' states are distributed with a mean cost lower than that of the original distribution that generated the parents. In contrast to the mean, the variance of the offspring's cost is shown to be asymptotically the same as that of the original distribution. Consequently, sampling from the offspring's distribution yields a higher chance of finding an approximate global minimum than sampling from the original distribution, thereby accelerating the global search. This feature is distinct from the distribution obtained by mixing a large population of favorable states, which lowers the variance of the offspring. These findings demonstrate the advantage of a crossover between two favorable states over a mixture of many favorable states for efficiently determining an approximate global minimum.
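The selection-and-crossover step described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual experiment: the random quadratic cost over binary states (the coupling matrix `J`) and all pool/offspring sizes are assumptions chosen to give a rugged cost landscape. It samples a random pool, selects the two best states as parents, generates offspring by uniform crossover, and compares the mean and variance of the offspring's costs with those of the original pool, alongside offspring drawn from the bitwise marginals of many favorable states for contrast.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20        # dimensionality of binary states (assumption)
POOL = 200    # size of the randomly sampled pool (assumption)
KIDS = 200    # number of offspring generated (assumption)

# Hypothetical cost with many local minima: a random symmetric quadratic
# form over {0,1}^N. This stands in for the paper's class of cost functions.
J = rng.normal(size=(N, N))
J = (J + J.T) / 2

def cost(s):
    return s @ J @ s

# Random search: sample a pool of binary states uniformly.
pool = rng.integers(0, 2, size=(POOL, N))
pool_costs = np.array([cost(s) for s in pool])

# Selection: the two lowest-cost states become the parents.
p1, p2 = pool[np.argsort(pool_costs)[:2]]

# Uniform crossover: each offspring bit is taken from one parent at random.
mask = rng.integers(0, 2, size=(KIDS, N)).astype(bool)
offspring = np.where(mask, p1, p2)
off_costs = np.array([cost(s) for s in offspring])

# Contrast: offspring sampled from the bitwise marginals of the 50 best
# states (a mixture of many favorable states).
top50 = pool[np.argsort(pool_costs)[:50]]
marginals = top50.mean(axis=0)
mixed = (rng.random(size=(KIDS, N)) < marginals).astype(int)
mixed_costs = np.array([cost(s) for s in mixed])

print("pool:      mean %+8.2f  var %8.2f" % (pool_costs.mean(), pool_costs.var()))
print("crossover: mean %+8.2f  var %8.2f" % (off_costs.mean(), off_costs.var()))
print("mixture:   mean %+8.2f  var %8.2f" % (mixed_costs.mean(), mixed_costs.var()))
```

With a fixed seed, the two-parent crossover offspring should exhibit a mean cost below that of the original pool, illustrating why repeated sampling from the offspring's distribution reaches low-cost states faster than plain random search.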



