A hybrid evolutionary algorithm with importance sampling for multi-dimensional optimization

08/23/2013, by Guanghui Huang et al., Chongqing University

A hybrid evolutionary algorithm with an importance sampling method is proposed for multi-dimensional optimization problems in this paper. In order to make use of the information provided in the search process, a set of visited solutions is selected to give scores to the intervals in each dimension, and these scores are updated as the algorithm proceeds. Intervals with higher scores are regarded as good intervals, and they are used to estimate the joint distribution of optimal solutions through an interaction with the pool of good genetics, which are the individuals with smaller fitness values. The sampling probabilities for good genetics are in turn determined through an interaction with the estimated good intervals. This cross validation mechanism determines the sampling probabilities for good intervals and good genetics, and the resulting probabilities are used to design crossover, mutation and other stochastic operators with the importance sampling method. As the selection of genetics and intervals does not depend directly on fitness values, the resulting offspring may avoid the trap of local optima. A purely random EA is also combined into the proposed algorithm to maintain the diversity of the population. Thirty benchmark test functions are used to evaluate the performance of the proposed algorithm, and it is found that the proposed hybrid algorithm is an efficient algorithm for the multi-dimensional optimization problems considered in this paper.


I Introduction

Optimization is the process of seeking a better or the best alternative solution from a number of possible solutions [1]. As the analytical optimal solution is difficult to obtain even for relatively simple application problems, the need for numerical optimization algorithms arises in almost every field of engineering design, systems operation, decision making, and computer science [3, 2, 4]. In global optimization problems, the particular challenge is that an algorithm may become trapped in a local optimum of the objective function when the dimension is high and there are numerous local optima [5].

Typical conventional search methods include steepest descent methods, conjugate gradient, quadratic programming, and linear approximation methods. These strategies rely on local information about the objective function to decide their next move in the neighborhood of visited solutions. Their main advantage is efficiency; however, they tend to be sensitive to the choice of starting point and are more likely to settle at non-global optima than modern stochastic algorithms [1].

Modern stochastic algorithms such as evolutionary algorithms (EAs) draw inspiration from biological evolution. They guide the evolution of a set of randomly selected individuals through a number of generations toward the global optimum, making use of competitive selection, recombination, crossover, mutation or other stochastic operators to generate new solutions [1, 4]. They only require information about the objective function itself; other properties such as differentiability or continuity are not necessary. EAs essentially work with building blocks, whose number increases exponentially as the evolution through generations proceeds, which results in an efficient exploitation of the given search space.

Modern stochastic optimizers include simulated annealing, Tabu search, genetic algorithms, evolutionary programming, evolution strategies, differential evolution, and others [6, 7, 8, 9, 10, 11, 12, 13, 14, 17, 15, 18, 19, 20, 16, 21]. Most of the successful applications of EAs are limited to problems with dimensions below 30 [14, 17, 15, 16]. Only in the last decade did researchers begin to test their EAs on problems with more than 30 dimensions [22, 23, 24, 25, 26, 27, 28, 31, 29, 30, 5].

To deal with these high dimensional and complex problems effectively and to enhance EAs, many researchers have tried to combine techniques from other research fields into EAs. The combination of an evolutionary algorithm with a local search approach is known as a Memetic or hybrid algorithm [32]. Several newly designed hybrid algorithms have been applied to practical problems [34, 35, 36, 37, 33]. Studies on hybrid algorithms have demonstrated that they converge to high quality solutions more efficiently than their conventional counterparts [1]. The purpose of this paper is to develop a more efficient hybrid EA for high dimensional optimization problems.

Several local search methods have been successfully combined into EAs. A robust stochastic genetic algorithm (StGA) for global numerical optimization is given in [4], where a stochastic coding scheme based on the Gaussian distribution is proposed. A mutation operator based on the Cauchy distribution was proposed as a “fast evolutionary programming” [17], and a further generalization of the mutation operator with the Lévy distribution was given in [31]. These algorithms are based on assumptions about the sampling distributions. In order to avoid the influence of such distributional assumptions, a non-parametric importance sampling method is proposed in this paper.

Experimental design methods have also been successfully combined into EAs [3]. Zhang and Leung were the first to combine orthogonal design into EAs for a discrete optimization problem [38], and Li and Smith used Latin squares to improve EAs [39]. Tsai et al. combined the Taguchi method into a genetic algorithm [40]. Other researchers set up a marginal model to estimate the distribution of globally optimal solutions and obtained good results [42, 41]. However, estimating marginal distributions is not enough for high dimensional optimization problems, because the number of possible combinations increases exponentially with the scale of the problem.

A relatively simple method for estimating the joint distribution of optimal solutions is proposed in this paper. It is assumed that an interval which gives an individual a smaller fitness value than that of a similar individual should receive a larger probability in the estimated joint distribution. Therefore, a set of genetics is selected from the visited solutions to give a score to each interval, and intervals with scores beyond a given quantile are regarded as good intervals for each dimension. On the other hand, solutions with smaller fitness values are regarded as good genetics, and good individuals with more elements falling into good intervals are more likely to be optimal solutions, so they should be given a larger probability of selection. At the same time, good intervals in which more good genetics appear should be given a larger probability of selection. This cross validation between the good intervals and the pool of good genetics determines the importance sampling probabilities for good intervals and good genetics in this paper.

Many stochastic algorithms do not memorize the places they have visited, and the information about the evaluated solutions is not taken into consideration in further search. In order to improve the efficiency of EAs, a genetic algorithm that adaptively mutates and never revisits was proposed in [43], and an evolutionary algorithm based on the entire previous search history (HdEA) was proposed in [44]. However, more and more visited solutions need to be memorized as the algorithm proceeds, so the memory requirement may become extremely large. In order to use the information provided by the previous search process while avoiding this extra memory requirement, only part of the visited solutions is selected and used to score the intervals in this paper. These solutions are updated from one generation to the next, and the memory requirement is a parameter which can be adjusted during algorithm design.

Premature population convergence around a local optimum is a common problem of traditional genetic algorithms [45]. It is a result of individuals hastily congregating within a small region of the search space [46]. Maintaining a diverse population is very important for evolutionary algorithms, which means that the selection of individuals cannot depend only on their fitness scores; other principles, such as the diversity measure proposed in [46], should also be taken into consideration. In this paper, the importance sampling distributions for individuals and intervals are determined through a cross validation mechanism between the pool of good genetics and the good intervals, which is not directly related to the fitness values. In addition, a purely random EA is combined into the proposed algorithm to maintain the diversity of individuals.

Thirty test functions and nine benchmark evolutionary algorithms are selected to evaluate the performance of the proposed algorithm. In our numerical investigations there are new optimal solutions found, solutions similar to the best results reported in the literature, and solutions close to the best results. On the other hand, there are test functions for which the proposed algorithm cannot find the optimal solutions efficiently. However, among the algorithms considered in this paper, the proposed algorithm has the smallest number of fitness values that differ from the optimal solutions in order of magnitude.

The remainder of this paper is structured as follows. Section II describes the problem of optimization for multidimensional functions. The details of hybrid EA are given in Section III. Section IV is devoted to the empirical investigations of the proposed algorithm through 30 test functions. And conclusions and discussions are given in Section V.

II Optimization problem

The problem we consider is an unconstrained global optimization problem

\min_{x \in S} f(x),   (1)

where x = (x_1, x_2, \dots, x_n) is a vector with n elements, S = [L_1, U_1] \times \dots \times [L_n, U_n] is a subset of \mathbb{R}^n, and L_i and U_i are the lower and upper boundaries of x_i respectively. The value of the objective function f at a point x is called the fitness value of x in this paper. The purpose of optimization is to find the solutions which make the objective function reach its minimum value.
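For illustration, the following minimal Python sketch sets up a box-constrained minimization problem of the form (1); the Rastrigin-style objective and the names `lower`, `upper`, and `fitness` are assumptions introduced here for illustration, not part of the paper.

```python
import numpy as np

n = 30                                   # number of dimensions (illustrative)
lower = np.full(n, -5.12)                # assumed lower boundaries L_i
upper = np.full(n, 5.12)                 # assumed upper boundaries U_i

def fitness(x):
    """Objective f(x); the Rastrigin function is only a stand-in example."""
    return 10.0 * len(x) + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

# A candidate solution is any point inside the box S = prod_i [L_i, U_i].
x = lower + np.random.rand(n) * (upper - lower)
print(fitness(x))
```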

III Hybrid EA with importance sampling

Fig. 1: HisEA for multi-dimensional optimization problems.

A canonical EA is a population based optimization algorithm, where individuals are used to generate the offspring generation with genetic operators such as mutation, crossover, and selection. The individuals with smaller fitness values survive the evolution of the population, while the information provided by the individuals that do not survive is completely dropped from the further search process. Some researchers have suggested using this information to improve the performance of EAs [43, 44]. Following this line, the information obtained in the search process is used in this paper to design new crossover, mutation, and interpolation operators with an importance sampling method.

III-A Initiation

The individuals of the first generation are randomly generated within the search space, where the size of the first generation is a predetermined parameter. A fixed number of individuals are chosen to form the pool of good genetics. The search range in each dimension is partitioned into subintervals of equal length. A base number of newly generated individuals is also specified; the numbers of new solutions generated by the crossover, mutation and interpolation operators in the following sections are multiples of this base number. Only four parameters need to be determined before applying the hybrid EA with the importance sampling method (HisEA). Figure 1 shows the flow chart of HisEA.
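A minimal sketch of this initiation step is given below; the parameter names `pop_size`, `pool_size`, and `K` are assumptions standing in for the paper's (unreproduced) notation.

```python
import numpy as np

def initiate(fitness, lower, upper, pop_size=200, pool_size=30, K=50):
    """Randomly generate the first generation and select the pool of good genetics."""
    n = len(lower)
    pop = lower + np.random.rand(pop_size, n) * (upper - lower)
    fit = np.array([fitness(x) for x in pop])
    # pool of good genetics: the pool_size individuals with the smallest fitness
    pool = pop[np.argsort(fit)[:pool_size]]
    # partition each dimension into K subintervals of equal length h_i
    h = (upper - lower) / K
    edges = [lower[i] + h[i] * np.arange(K + 1) for i in range(n)]
    return pop, fit, pool, edges
```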

III-B Fitness scores of individuals

Suppose the individuals in the current pool of good genetics are listed with their fitness values in increasing order, and the maximum fitness value in the current search history is also recorded. The score for the ith individual is defined as

(2)

which indicates that an individual with a smaller fitness value is given a relatively larger score within the current pool of good genetics. The maximum fitness value is updated during the subsequent search, so that the score of each individual changes to reflect the new information obtained in the search process.
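Since the exact form of Eq. (2) is not reproduced above, the sketch below uses one plausible instantiation in which scores decrease with fitness and are normalized against the historical maximum; treat the linear form as an assumption, not the paper's formula.

```python
import numpy as np

def individual_scores(pool_fitness, f_max):
    """Assign larger scores to individuals with smaller fitness.
    pool_fitness: fitness values of the pool, sorted in increasing order.
    f_max: maximum fitness value seen so far in the search history.
    The linear form below is an assumed stand-in for Eq. (2)."""
    f = np.asarray(pool_fitness, dtype=float)
    scores = f_max - f                    # smaller fitness -> larger score
    total = scores.sum()
    return scores / total if total > 0 else np.full(len(f), 1.0 / len(f))
```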

III-C Scores of intervals

As each dimension of the search space has been partitioned into K subintervals of equal length, the length of one subinterval for the ith dimension is

h_i = (U_i - L_i)/K,   (3)

and the kth subinterval of the ith dimension is bounded by the partition points of this dimension, that is,

I_{i,k} = [L_i + (k-1) h_i, \; L_i + k h_i], \qquad k = 1, \dots, K.   (4)

III-C1 Selection of scoring genetics for intervals

A pool of genetics is selected from all of the evaluated solutions to give a score for each interval of every dimension with the following algorithm.

  1. Initiation according to the first dimension. For the kth interval of the first dimension, the two solutions in the first generation with the smallest fitness values among those whose first elements fall into this interval are put into the scoring pool. In other words, the first two solutions whose elements of the first dimension lie in the same interval are selected according to their fitness. In addition, the solution with the maximum fitness value is included in the scoring pool.

  2. Repeat step (1) for each of the remaining dimensions.

  3. Update the scoring pool according to newly evaluated solutions. If a newly evaluated solution belongs to the kth interval, the two solutions with the smallest fitness values among the previously selected genetics for this interval and the new genetic are retained for the kth interval. The solution with the maximum fitness value is also updated.
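The sketch below maintains, for each subinterval of one dimension, the two lowest-fitness solutions seen so far together with the overall worst solution; the data layout and the function name are assumptions made for illustration.

```python
import numpy as np

def update_scoring_pool(pool, worst, x, fx, dim, edges):
    """Update the per-interval scoring pool with a newly evaluated solution x.
    pool:  dict mapping interval index k -> list of at most two (fitness, x) pairs
    worst: (fitness, x) pair with the maximum fitness in the search history, or None
    dim:   dimension index used to locate the interval of x
    edges: partition points of this dimension."""
    k = int(np.searchsorted(edges, x[dim], side="right")) - 1
    k = min(max(k, 0), len(edges) - 2)           # clamp to a valid interval index
    entries = pool.get(k, []) + [(fx, x)]
    entries.sort(key=lambda e: e[0])
    pool[k] = entries[:2]                         # keep the two smallest fitness values
    if worst is None or fx > worst[0]:
        worst = (fx, x)                           # keep the solution with maximum fitness
    return pool, worst
```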

III-C2 Selection of good intervals

The scoring pool is used to give a score to each subinterval of every dimension; the score of the jth interval of the ith dimension is an entry of a score matrix. The score matrix is used to determine the good intervals for each dimension with the following algorithm.

  1. Initiation. Set the score of every subinterval of every dimension to its initial value.

  2. For the ith dimension and the jth interval, find the genetics in the scoring pool whose ith elements lie in the jth subinterval, and count the number of such genetics appearing in this subinterval.

    1. Case 1. If fewer than two genetics appear in the subinterval, its score keeps its initial value.

    2. Case 2. If at least two genetics appear in the subinterval, select the first two according to their order in the scoring pool, denote their weights, and set the score of the subinterval as

      (5)
  3. Repeat step (2) for every dimension and every subinterval.

  4. Suppose a quantile of the kth row of the score matrix is taken as a threshold. If the score of a subinterval exceeds this threshold, the subinterval is said to be a good interval for the kth dimension.

  5. If two adjacent subintervals are both good intervals, combine them into a single interval.

  6. Repeat step (5) until there are no more subintervals to be combined. The resulting combined interval is said to be the mth good interval for the kth dimension.
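A sketch of how good intervals for one dimension could be selected from a row of the score matrix: scores above a quantile threshold mark good subintervals, and adjacent good subintervals are merged. The quantile level `q` and the return format are assumptions.

```python
import numpy as np

def good_intervals(scores, edges, q=0.75):
    """Return merged good intervals [(a, b), ...] for one dimension.
    scores: score of each of the K subintervals of this dimension
    edges:  K+1 partition points of this dimension
    q:      assumed quantile level used as the threshold."""
    scores = np.asarray(scores, dtype=float)
    threshold = np.quantile(scores, q)
    good = scores > threshold
    intervals = []
    k = 0
    while k < len(scores):
        if good[k]:
            start = k
            while k + 1 < len(scores) and good[k + 1]:   # merge adjacent good subintervals
                k += 1
            intervals.append((edges[start], edges[k + 1]))
        k += 1
    return intervals
```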

III-D Sampling probabilities of individuals

The individuals in the pool of good genetics are chosen according to their fitness values. In order to describe the distributional information among all of the dimensions, the sampling probabilities of individuals are chosen in the following way. Denote the indicator function of an element falling into a good interval of its dimension. Let

(6)

The score of an individual is

(7)

which is the number of elements falling into the good intervals.

Denote the probability that the ith individual is chosen among the individuals in the pool of good genetics,

(8)

which means that individuals with more elements falling into the good intervals are more likely to be chosen.

The sampling probabilities for individuals are not directly based on the fitness values in this paper, which can be regarded as an alternative way to maintain the diversity of the population.
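A sketch of Eqs. (6)-(8) as described in the text: each individual's score counts how many of its elements fall into a good interval of the corresponding dimension, and the sampling probability is that count normalized over the pool; the function and variable names are assumptions.

```python
import numpy as np

def in_good_interval(value, intervals):
    """Indicator: 1 if value lies in any good interval of its dimension."""
    return any(a <= value <= b for a, b in intervals)

def individual_probabilities(pool, good_by_dim):
    """pool: array of shape (pool_size, n); good_by_dim: list of good-interval lists per dimension."""
    counts = np.array([
        sum(in_good_interval(x[i], good_by_dim[i]) for i in range(len(x)))
        for x in pool
    ], dtype=float)
    total = counts.sum()
    return counts / total if total > 0 else np.full(len(pool), 1.0 / len(pool))
```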

III-E Sampling probabilities of intervals

There is a cross validation mechanism between the chosen good intervals and the individuals in the pool of good genetics; it was used to determine the sampling probabilities of the individuals in the previous section, and it is used to determine the sampling probabilities of the intervals in this section with the following algorithm.

  1. Initiation. Let a counter record the number of individuals falling into the kth good interval of the mth dimension, initialized to zero for every good interval of every dimension.

  2. For an individual in the pool of good genetics, if its mth element falls into the kth good interval of the mth dimension, increase the corresponding counter by one; otherwise leave the counter unchanged.

  3. Repeat step (2) for every dimension.

  4. Repeat steps (2) and (3) for every individual in the pool of good genetics.

  5. The sampling probability for the kth good interval of the mth dimension is

    (9)

The estimated sampling probabilities for individuals and intervals are used to design a crossover operator, two kinds of mutation operators, and an interpolation operator with the importance sampling method in the following sections.
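A sketch of Eq. (9) as described by the steps above: for each dimension, count how many individuals of the pool of good genetics fall into each good interval and normalize within that dimension; the names are assumptions.

```python
import numpy as np

def interval_probabilities(pool, good_by_dim):
    """Return, per dimension, the sampling probability of each good interval.
    pool: array of shape (pool_size, n); good_by_dim[m]: list of (a, b) good intervals."""
    probs = []
    for m, intervals in enumerate(good_by_dim):
        counts = np.zeros(len(intervals))
        for x in pool:
            for k, (a, b) in enumerate(intervals):
                if a <= x[m] <= b:
                    counts[k] += 1          # one more good genetic in this good interval
                    break
        total = counts.sum()
        probs.append(counts / total if total > 0 else
                     np.full(len(intervals), 1.0 / max(len(intervals), 1)))
    return probs
```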

III-F Crossover operator with importance sampling

The crossover operator is used to generate new individuals from their parents. As the elements in the good intervals are more likely to belong to optimal solutions, they are kept in the offspring, while the elements not in the good intervals are replaced by the elements of the other parent that lie in the good intervals, according to the following algorithm.

  1. Sample two different individuals from the pool of good genetics with the importance sampling probabilities.

  2. Find the elements in the good intervals for the two parents; their positions are recorded by two indicator vectors, where an entry equal to one means that the corresponding element of that parent falls into a good interval, and an entry equal to zero means that it does not.

  3. Generate one individual from the first parent, with elements chosen from the second parent, by the following algorithm:

    1. If both parents' ith elements fall into good intervals, the offspring keeps the element of the first parent;

    2. If only the first parent's ith element falls into a good interval, the offspring keeps that element;

    3. If only the second parent's ith element falls into a good interval, the offspring takes the element of the second parent;

    4. If neither element falls into a good interval, the offspring keeps the element of the first parent.

  4. Repeat step (3) for every dimension.

  5. Repeat steps (1) to (4) the prescribed number of times to generate a set of new genetics.

The proposed algorithm is based on the pool of good genetics, which is chosen according to fitness values. On the other hand, the two parents used to generate new individuals are sampled with the importance sampling probabilities, which are not directly related to the fitness values, and the result of the crossover is related to the estimated good intervals, which can be regarded as an estimate of the joint distribution of the optimal solutions. This is the difference between the proposed hybrid algorithm and traditional EAs.
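A sketch of the crossover operator: two parents are drawn with the individual probabilities, and each offspring element keeps the first parent's value when it lies in a good interval, otherwise it borrows the second parent's value if that one is good. The exact rule for the four indicator cases in step (3) is not reproduced above, so the case handling below is an assumption.

```python
import numpy as np

def is_crossover(parents, p_ind, good_by_dim, rng=np.random.default_rng()):
    """Importance-sampling crossover producing one offspring (assumed case handling)."""
    i, j = rng.choice(len(parents), size=2, replace=False, p=p_ind)
    p1, p2 = parents[i], parents[j]
    child = p1.copy()
    for d in range(len(p1)):
        good1 = any(a <= p1[d] <= b for a, b in good_by_dim[d])
        good2 = any(a <= p2[d] <= b for a, b in good_by_dim[d])
        if not good1 and good2:
            child[d] = p2[d]     # replace a non-good element by the other parent's good element
        # otherwise keep the first parent's element (assumption for the remaining cases)
    return child
```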

III-G Mutation operators with importance sampling

Two kinds of mutation operators are proposed in this section, both of which are based on the importance sampling probabilities.

III-G1 Locally adjusting algorithm

There may be some individuals in the pool of good genetics whose elements do not all fall into the good intervals. In order to make those individuals look more like good genetics, a locally adjusting algorithm is proposed with the following steps.

  1. Select one of the individuals in the pool of good genetics according to the importance sampling probabilities, and initialize the individual to be generated as a copy of it.

  2. Mutation for the kth dimension. If the kth element does not fall into any good interval of the kth dimension, select one of the good intervals of this dimension according to the interval sampling probabilities, and adjust the kth element to a uniformly distributed random value within the selected interval.

  3. Repeat step 2 for every dimension. If no dimension needs to be adjusted, no new genetic is generated in this run.

  4. Repeat the above steps the prescribed number of times to generate a set of new individuals.
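A sketch of the locally adjusting mutation: only elements that fall outside every good interval of their dimension are resampled uniformly from a good interval chosen with the interval probabilities; the names are assumptions.

```python
import numpy as np

def locally_adjust(x, good_by_dim, p_int, rng=np.random.default_rng()):
    """Return a mutated copy of x, or None if no dimension needs adjusting."""
    y = x.copy()
    adjusted = False
    for d, intervals in enumerate(good_by_dim):
        if intervals and not any(a <= x[d] <= b for a, b in intervals):
            k = rng.choice(len(intervals), p=p_int[d])   # pick a good interval
            a, b = intervals[k]
            y[d] = rng.uniform(a, b)                     # uniform sample inside it
            adjusted = True
    return y if adjusted else None
```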

III-G2 Entirely adjusting algorithm

Another mutation algorithm is proposed to explore the visited space with the following steps.

  1. Select one of the individuals in the pool of good genetics according to the importance sampling probabilities as the parent of the individual to be generated.

  2. Mutation for the kth dimension with the following algorithm:

    1. If the kth element does not fall into any good interval, select one of the good intervals according to the interval sampling probabilities, and adjust the element to a value sampled from the selected interval.

    2. If the kth element falls into some good interval, the element is adjusted to a uniformly distributed random value within that interval.

  3. Repeat step (2) for every dimension.

  4. Repeat steps (1) to (3) the prescribed number of times to generate a set of new individuals.

The difference between these two mutation operators is that the elements falling into the good intervals are not adjusted by the locally adjusting algorithm, whereas they are adjusted by the entirely adjusting algorithm.
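A sketch of the entirely adjusting mutation, which resamples every dimension: elements outside the good intervals are moved into a sampled good interval, while elements already inside a good interval are redrawn uniformly within that interval; the names are assumptions.

```python
import numpy as np

def entirely_adjust(x, good_by_dim, p_int, rng=np.random.default_rng()):
    """Return a mutated copy of x in which every dimension is adjusted."""
    y = x.copy()
    for d, intervals in enumerate(good_by_dim):
        if not intervals:
            continue
        containing = [iv for iv in intervals if iv[0] <= x[d] <= iv[1]]
        if containing:
            a, b = containing[0]                         # element already in a good interval
        else:
            k = rng.choice(len(intervals), p=p_int[d])   # otherwise sample a good interval
            a, b = intervals[k]
        y[d] = rng.uniform(a, b)                         # uniform sample inside the interval
    return y
```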

III-H Interpolation operator with importance sampling

In order to search the space between two suboptimal solutions, an interpolation operator is adopted in this paper, where the estimated good intervals are used to guide the direction of the search with the following steps.

  1. Randomly choose two individuals from the pool of good genetics according to the importance sampling probabilities.

  2. Generate the element for the ith dimension with the following algorithm:

    1. If there exist two good intervals such that the ith elements of the two chosen individuals each fall into one of them, an intermediate index is computed with the floor function (the largest integer less than or equal to its argument), and the ith element of the new individual is generated as

      (10)

      where the random variable is uniformly distributed on the corresponding interval.

    2. If there exists one good interval containing the ith element of the first chosen individual, and no good interval contains that of the second, the ith element of the new individual is

      (11)
    3. If there exists one good interval containing the ith element of the second chosen individual, and no good interval contains that of the first, the ith element of the new individual is

      (12)
    4. If no good interval contains either of the two chosen elements, a good interval for the ith dimension is randomly selected according to the interval sampling probabilities, and the ith element of the new individual is

      (13)
  3. Repeat Step 2 for every dimension to generate a new individual.

  4. Repeat Step 1 to Step 3 the prescribed number of times to generate a set of new individuals.
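A sketch of the interpolation operator: for each dimension, the two sampled parent elements are combined, and the new element is generated either between the good elements of the two parents or inside a single (or freshly sampled) good interval. Equations (10)-(13) are not reproduced above, so the concrete sampling rules below are assumptions.

```python
import numpy as np

def interpolate(parents, p_ind, good_by_dim, p_int, rng=np.random.default_rng()):
    """Generate one new individual between two importance-sampled parents."""
    i, j = rng.choice(len(parents), size=2, replace=False, p=p_ind)
    p1, p2 = parents[i], parents[j]
    child = np.empty_like(p1)
    for d, intervals in enumerate(good_by_dim):
        c1 = next((iv for iv in intervals if iv[0] <= p1[d] <= iv[1]), None)
        c2 = next((iv for iv in intervals if iv[0] <= p2[d] <= iv[1]), None)
        if c1 and c2:
            lo, hi = sorted((p1[d], p2[d]))
            child[d] = rng.uniform(lo, hi)           # search between the two good elements
        elif c1:
            child[d] = rng.uniform(*c1)              # stay inside the good interval of parent 1
        elif c2:
            child[d] = rng.uniform(*c2)              # stay inside the good interval of parent 2
        elif intervals:
            k = rng.choice(len(intervals), p=p_int[d])
            child[d] = rng.uniform(*intervals[k])    # fall back to a sampled good interval
        else:
            child[d] = p1[d]                         # no good interval known for this dimension
    return child
```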

III-I Random sampling

In order to explore the search space, two kinds of random sampling methods are adopted in this paper: one is based on the importance sampling probabilities, and the other does not use the information obtained in the search process.

III-I1 Importance sampling algorithm

The importance sampling algorithm is designed to explore the search space, using the estimated distribution of optimal solutions in the following steps.

  1. For the ith dimension, one of the estimated good subintervals is sampled according to the interval sampling probabilities.

  2. Randomly draw one value from the selected subinterval as

    (14)

    where the value is uniformly distributed within the selected subinterval.

  3. Repeat Step 1 to Step 2 for every dimension.

  4. Repeat Step 1 to Step 3 the prescribed number of times to generate a set of new individuals.

As more and more individuals are generated from the estimated good intervals, the resolution of these intervals is improved.
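A sketch of this importance sampling generator, assuming Eq. (14) is a uniform draw within the chosen subinterval; the fall-back to the whole range when a dimension has no good interval is also an assumption.

```python
import numpy as np

def sample_from_good_intervals(good_by_dim, p_int, lower, upper, rng=np.random.default_rng()):
    """Generate one individual dimension by dimension from the estimated good intervals."""
    n = len(good_by_dim)
    x = np.empty(n)
    for d in range(n):
        if good_by_dim[d]:
            k = rng.choice(len(good_by_dim[d]), p=p_int[d])
            a, b = good_by_dim[d][k]
        else:
            a, b = lower[d], upper[d]        # fall back to the whole range if no good interval
        x[d] = rng.uniform(a, b)
    return x
```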

III-I2 Purely random sampling

In order to keep the diversity of the chosen good genetics and to reduce the risk of premature convergence, a purely random sampling method is adopted with the following steps.

  1. For the ith dimension, the ith element of the new individual is

    x_i = L_i + u (U_i - L_i),   (15)

    where u is uniformly distributed on [0, 1], and L_i and U_i are the lower and upper boundaries for the ith dimension respectively.

  2. Repeat Step 1 for every dimension.

  3. Repeat Step 1 to Step 2 the prescribed number of times to generate a set of new individuals.
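A minimal sketch of this purely random sampler, drawing each element uniformly between the dimension boundaries as in Eq. (15).

```python
import numpy as np

def purely_random_individual(lower, upper, rng=np.random.default_rng()):
    """Uniform sample of a full individual within the search boundaries."""
    return lower + rng.random(len(lower)) * (upper - lower)
```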

III-J Purely random EA

In order to keep the diversity of genetics and to escape the traps of locally optimal solutions, an evolutionary algorithm with purely random crossover and mutation operators is adopted in this paper; it depends on the pool of good genetics but does not use the information from the previous search process.

III-J1 Purely random crossover

A purely random crossover operator is adopted in this paper with the following steps.

  1. Select two individuals from the pool of good genetics with equal probability.

  2. Denote the two new individuals to be generated from them.

  3. For the ith dimension, randomly sample a number from a Bernoulli distribution taking the values 0 and 1 with equal probability. The elements of the two new individuals are determined with the following algorithm.

    1. If the sampled number equals one, the two offspring take the ith elements of the two parents in the original order.

    2. If the sampled number equals zero, the ith elements of the two parents are exchanged between the two offspring.

  4. Repeat Step 3 for every dimension.

  5. Repeat Step 1 to Step 4 the prescribed number of times to generate a set of new individuals.

As the result of the random trial is equally distributed between the two outcomes, the crossover between the two parents is purely random, which is designed to maintain the diversity of the population.
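A sketch of the purely random crossover: two equally likely parents exchange elements dimension by dimension according to a fair coin flip; whether a one means "keep" or "swap" is an assumption.

```python
import numpy as np

def random_crossover(pool, rng=np.random.default_rng()):
    """Uniform crossover of two equally likely parents, producing two offspring."""
    i, j = rng.choice(len(pool), size=2, replace=False)
    p1, p2 = pool[i].copy(), pool[j].copy()
    swap = rng.integers(0, 2, size=len(p1)).astype(bool)    # fair coin per dimension
    c1, c2 = p1.copy(), p2.copy()
    c1[swap], c2[swap] = p2[swap], p1[swap]                  # exchange elements where the coin is 1
    return c1, c2
```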

III-J2 Purely random mutation

A similar algorithm for mutation is adopted in this paper, where the elements of the solution are randomly selected to be mutated with the following algorithm.

  1. Randomly select one individual from the pool of good genetics with equal probability.

  2. Randomly sample a value of a binary random variable for the ith dimension. The element of the new genetic in the ith dimension is determined by the following algorithm.

    1. If the sampled value equals one, the ith element is replaced by a uniformly distributed random value within the boundaries of the ith dimension.

    2. If the sampled value equals zero, the ith element of the selected individual is kept unchanged.

  3. Repeat Step 2 for every dimension.

  4. Repeat Step 1 to Step 3 the prescribed number of times to generate a set of new individuals.
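A sketch of the purely random mutation: each element of an equally likely parent is replaced, with some probability, by a uniform draw from its dimension range; the per-dimension mutation probability `p_mut` is an assumed parameter.

```python
import numpy as np

def random_mutation(pool, lower, upper, p_mut=0.5, rng=np.random.default_rng()):
    """Mutate a randomly chosen parent; p_mut is an assumed per-dimension mutation probability."""
    parent = pool[rng.integers(len(pool))]
    child = parent.copy()
    for d in range(len(child)):
        if rng.random() < p_mut:                        # Bernoulli trial for this dimension
            child[d] = rng.uniform(lower[d], upper[d])  # uniform resample within the boundaries
    return child
```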

The total number of newly generated individuals from all of the previous operators is fixed for each run of the hybrid algorithm, and a fixed share of them is generated without the information obtained in the search process, which is designed to maintain the diversity of the population.

III-K Mature condition

A pool of good genetics is used to generate new individuals, whose fitness values are evaluated and compared with those of their parents, and a new pool of good genetics is selected from the parents and offspring according to their fitness values. The stopping condition is based on a comparison between the former pool of good genetics and the new pool with the following algorithm.

Select a set of quantile levels, where the number of levels is a parameter to be taken into consideration. The quantiles of the former pool for each dimension are denoted as

(16)

where each entry is the kth quantile for the ith dimension. The corresponding quantiles of the new pool are denoted similarly. The difference between these two sets of quantiles is

(17)

and the hybrid EA is stopped when this difference is sufficiently small or when the number of loops exceeds a prescribed limit, where the quantile levels used in this paper are equally spaced points with a fixed step length.
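A sketch of this mature condition: per-dimension quantiles of the old and new pools of good genetics are compared, and the search stops when they barely change or a loop budget is exhausted; the quantile levels, tolerance, and aggregation by the maximum difference are assumptions.

```python
import numpy as np

def is_mature(old_pool, new_pool, levels=np.arange(0.1, 1.0, 0.1), tol=1e-6):
    """Compare per-dimension quantiles of two pools of good genetics (Eqs. (16)-(17))."""
    q_old = np.quantile(old_pool, levels, axis=0)     # shape (len(levels), n)
    q_new = np.quantile(new_pool, levels, axis=0)
    return np.max(np.abs(q_old - q_new)) < tol        # stop when the quantiles barely move

# in the main loop one would also stop once the number of loops exceeds a fixed limit
```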

IV Empirical investigations

To evaluate the performance of the proposed algorithm, the optimal fitness values found by HisEA are compared with those of 9 benchmark evolutionary algorithms on 30 test functions in this paper.

IV-A Algorithms for comparison

IV-A1 HdEA

HdEA is an evolutionary algorithm that uses the entire search history to improve its mutation strategy [44]. It uses a fitness function approximated from the search history to perform mutation. Since the proposed mutation operator is adaptive and parameter-less, HdEA has only three control parameters: neighborhood size, population size, and crossover rate. The source code of HdEA is available at http://www.ee.cityu.edu.hk/~syyuen/Public/Code.html.

IV-A2 RCGA-UNDX

Real Coded GA With Uni-Modal Normal Distribution Crossover (RCGA-UNDX) is a real coded GA that deals with continuous search spaces [47, 44]. It applies the uni-modal normal distribution crossover (UNDX) to preserve the statistics of the population. UNDX is a multiparent genetic operator in which the distribution of the corresponding offspring follows the distribution of the parents.

IV-A3 CMA-ES

Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is an evolution strategy that adapts the full covariance matrix of a normal search (mutation) distribution [48, 44]. An important property of CMA-ES is its invariance against linear transformations of the search space. The underlying idea is to gather information about successful search steps to modify the covariance matrix of the mutation distribution in a de-randomized, goal directed fashion. Changes to the covariance matrix are such that variances in directions of the search space that have previously been successful are increased, while those in other directions decrease passively. The accumulation of information over a number of search steps makes it possible to reliably adapt the covariance matrix even when using small populations. CMA-ES is designed with the emphasis that the same parameters are used in all applications in order to be “parameter-less.” The source code of CMA-ES is taken from [48] (Aug. 2007 version).

IV-A4 DE

Differential evolution (DE) is a stochastic search algorithm [49, 44]. The basic idea behind DE is a scheme that generates trial parameter vectors. DE adds the weighted difference between two population vectors to a mutant vector, and the trial vector is the crossover between the mutant vector and the parent vector. By doing so, no separate probability distribution is used, which makes the scheme completely self-organizing.

IV-A5 ODE

Opposition-based differential evolution (ODE) utilizes the concept of opposition-based learning (OBL) [50] to accelerate the convergence rate of DE. The main idea behind OBL is the simultaneous considerations of a solution and its corresponding opposite solution. ODE considers the evaluations of the opposite solution in a generation depending on a jumping rate [51, 50, 44].

IV-A6 DEahcSPX

Differential Evolution With Adaptive Hill-Climbing Simplex Crossover (DEahcSPX) attempts to accelerate the classic DE by a local search strategy, named adaptive hill-climbing crossover-based local search. It adopts the simplex crossover operation (SPX) to generate offspring individual for hill-climbing [44, 42, 50].

IV-A7 DPSO

Dissipative Particle Swarm Optimization (DPSO) is a modified PSO which introduces a random mutation that helps particles escape from local minima. Its rule can be described as follows: if a uniformly distributed random number in the range [0, 1] falls below the mutation rate, the velocity is reset using another uniform random number and the maximum velocity, where the mutation rate controls the velocity and a constant controls the extent of mutation [52, 44].

IV-A8 SEPSO

PSO With Spatial Particle Extension (SEPSO) is another modified PSO which introduces the spatial particle extension model to increase the diversity. When particles start to cluster and collide, they bounce off by adjusting their velocities [44, 53].

IV-A9 EDA

EDA is based on undirected graphical models and Bayesian networks. The source code of the EDA is taken from [54] (Feb. 2009 version). The implementation is conceived to allow the user different combinations of selection, learning, sampling, and local search procedures [54, 44].

Each of the above algorithms was applied to some of the test functions, and the results were reported in [44] and the references therein. We use these existing results for a direct comparison in this section.

IV-B Simulations and results

Fig. 2: Convergence of HisEA for 2-dimensional test functions.
Fig. 3: Convergence of HisEA for 30-dimensional test functions.
Function
Dimension 30 30 30 30 30 30 30 30
Optimum 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
HdEA average 0.0000 0.0000 16920.2300 10.8802 21.1276 10.4615 0.0000 0.0000
std. dev. 0.0000 0.0000 2818.2200 1.3212 13.7111 0.6028 0.0000 0.0000
RCGA average 0.0000 0.0000 2811.5869 91.8556 71.1275 8.8836 219.8173 1.6946
-UNDX std. dev. 0.0000 0.0000 1668.9500 20.8748 67.2881 0.4382 12.6701 0.1151
CMA-ES average 0.0000 0.0000 0.0000 0.0000 0.0000 0.2303 53.6481 0.0014
std. dev. 0.0000 0.0000 0.0000 0.0000 0.0000 0.0893 14.1662 0.0036
DE average 0.0221 0.3594 26073.0400 49.5082 614.4588 10.9846 25.7105 0.9931
std. dev. 0.0048 0.0371 3339.0600 4.0114 112.1748 0.5641 2.6500 0.0301
ODE average 0.0000 0.0953 78.1691 0.0111 27.0344 8.5199 102.3277 0.0258
std. dev. 0.0000 0.0486 45.1045 0.0808 0.7121 0.4121 37.9763 0.0326
DEahcSPX average 0.1075 0.0322 65.9908 15.5882 3160.5891 8.7536 40.8070 0.1613
std. dev. 0.3990 0.1423 65.2220 3.5293 9387.6000 0.5200 26.7653 0.2685
DPSO average 3.3462 8.8458 1955.2153 10.5390 12789.4900 9.9202 125.4958 4.0567
std. dev. 0.9949 1.1698 24.5853 1.3743 78.0953 0.8192 3.9027 0.8773
SEPSO average 2.8527 8.8184 2984.5912 13.8670 10464.0700 10.6244 112.9555 3.6995
std. dev. 0.9324 1.3980 29.6756 1.7487 80.0175 0.8755 4.5789 0.9511
EDA average 3439.5320 22.2520 3749.1330 21.1420 30214.7330 100.0690 188.3840 30.5040
std. dev. 1221.2100 5.4338 1294.7400 5.9330 12162.0800 52.0020 20.5550 10.9870
HisEA average 0.0000 0.0000 319.4241 0.9769 28.0095 0.0002 0.0800 0.0031
std.dev. 0.0000 0.0000 242.1950 0.2552 1.5109 0.0004 0.2425 0.0042
TABLE I: Average and Standard Deviation of the Best Fitness Values.
function
Dimension 30 30 2 2 2 30 30 30
Optimum 0.0000 -1.0316 0.3980 3.0000 0.0000 0.0000 0.0000
HdEA average -1.37E+04 0.00E+00 -1.0316 4.01E-01 4.41E+00 0.00E+00 0.00E+00 2.61E+02
std. dev. 2.54E+01 0.00E+00 1.00E-04 2.64E-02 4.08E+00 0.00E+00 0.00E+00 3.50E+01
RCGA average -5.94E+03 2.07E+01 -0.6587 4.60E-01 5.61E+01 6.92E+07 3.33E-01 2.94E+02
-UNDX std. dev. 4.87E+02 9.04E-02 3.12E-01 5.65E-02 2.96E+01 1.20E+07 9.54E-02 3.69E+01
CMA-ES average -5.40E+03 2.13E+01 -1.0235 3.98E-01 7.32E+00 4.14E+01 1.15E-02 0.00E+00
std. dev. 9.56E+01 4.32E-01 8.16E-02 0.00E+00 1.66E+01 1.01E+02 1.15E-01 0.00E+00
DE average -1.28E+04 1.83E+00 -0.6695 1.52E+00 1.48E+01 4.66E+03 3.20E-03 2.74E+02
std. dev. 1.56E+02 3.32E-01 3.21E-01 1.35E+00 1.04E+01 9.26E+02 8.00E-04 3.00E+01
ODE average -5.47E+03 9.90E-03 -1.0214 4.25E-01 3.52E+00 1.16E+00 1.50E-05 2.43E+01
std. dev. 5.06E+02 1.17E-02 1.09E-02 2.77E-02 4.77E-01 1.25E+00 0.00E+00 8.07E+00
DEahc- average -1.02E+04 2.82E+00 -0.4882 6.68E+00 2.18E+01 3.82E+04 5.98E-02 2.21E+00
SPX std. dev. 6.92E+02 4.03E+00 3.29E+00 1.93E+01 8.50E+01 2.01E+05 1.70E-01 3.85E+00
DPSO average -5.18E+03 6.07E+00 -1.0229 1.44E+00 3.14E+00 8.26E+06 1.31E+01 1.35E+02
std. dev. 2.57E+01 8.21E-01 1.17E-01 1.62E+00 6.47E-01 1.93E+03 1.66E+00 7.91E+00
SEPSO average -7.70E+03 6.44E+00 -1.0252 4.10E-01 3.06E+00 6.28E+06 1.40E+01 7.18E+01
std. dev. 2.73E+01 1.00E+00 1.04E-01 1.37E-01 2.93E-01 1.55E+03 1.45E+00 4.87E+00
EDA average -4.67E+03 1.02E+01 -1.031 3.98E-01 3.00E+00 6.75E+06 7.42E+04 6.97E+01
std. dev. 7.03E+02 1.29E+00 1.20E-03 0.00E+00 2.00E-06 4.13E+06 5.60E+04 2.91E+01
HisEA average -12558.8751 0.0000 -1.0316 0.3979 3.0000 2.46E+05 0.0000 0.3054
std.dev. 29.1137 0.0000 0.0000 0.0000 0.0000 1.91E+05 0.0000 0.1539
TABLE II: Average and Standard Deviation of the Best Fitness Values.
Function
Dimension 30 30 30 30 30 30 30 30
Optimum 0.0000 0.0000 -29.0000 0.0000 0.0000 -4930.0000
HdEA average 0.0004 4.8663 -24.9443 0.0000 -25.3678 0.1626 8025.425 -997867
std. dev. 0.0002 0.3451 0.9411 0.0000 0.572 0.4616 4773.2 0.0271
RCGA- average 10.2837 7.61 -6.633 0.3566 -8.8451 152.9672 62837.03 -330
UNDX std. dev. 1.3909 0.5734 0.452 0.073 0.5691 17.6976 3012.7 0.0000
CMA-ES average 0.0025 13.7823 -0.9678 0.4493 -19.1834 0.023 -2428.19 -951
std. dev. 0.0028 0.2792 0.732 0.258 1.8797 0.0472 0.0000 187570.7
DE average 0.1641 5.3987 -18.8816 0.0027 -18.3183 60.0966 122598.2 -958473
std. dev. 0.0486 0.5198 0.556 0.0006 0.6445 10.9953 25422.81 6695.09
ODE average 0.0299 0.0237 -27.8856 0.000027 -12.5543 26.0994 -4930 -610112
std. dev. 0.0099 0.1431 1.8404 0 1.2739 12.9456 1162.05 37311.6
DEahc- average 0.0013 4.6963 -14.6745 0.1752 -12.9365 37.1675 1911.297 -996116
SPX std. dev. 0.0078 1.3226 4.0927 0.1499 2.0401 17.6322 4085.18 5578.9
DPSO average 5.758 11.8132 -15.4114 0.6795 -10.3292 135.0221 26342.66 -342933
std. dev. 1.3801 0.521 1.2455 0.4754 0.8904 6.6864 86.3998 217.3297
SEPSO average 9.5052 12.0147 -16.9436 0.7684 -10.9954 152.5561 30572.09 -965431
std. dev. 1.7957 0.532 1.2194 0.5383 0.9763 7.1358 100.8033 24.4007
EDA average 12.235 12.309 -18.728 1.885 -9.361 10.527 141156.77 -286765.1
std. dev. 3.110696 0.17794 4.185106 0.444738 0.75983 4.54527 65517.19 36881.35
HisEA average 0.0027 2.6454 -28.9299 0.0000 -25.9147 0.0000 2834.5739 -984105.1432
std.dev. 0.0018 0.5052 0.1942 0.0000 0.9994 0.0000 1627.9243 3769.7065
TABLE III: Average and Standard Deviation of the Best Fitness Values.
function
Dimension 30 30 30 30 30 30
Optimum 0.9000 0.0000 -3.5000
HdEA average 1.0004 1.2051 -2E+34 -1.521 -29.559 20.6012
std. dev. 0.0002 0.1365 3.60E+33 0.6748 0.0289 28.8726
RCGA average 6.9638 2.3127 -1.3E+20 -3.3678 -10.2002 7440.466
-UNDX std. dev. 0.3607 0.1817 2.10E+20 0.0231 0.5695 2193.41
CMA-ES average 8.142 1.1979 -1.1E+29 -2.6016 -19.1408 319.3721
std. dev. 5.7645 0.247 4.40E+29 1.5276 2.0299 102.4076
DE average 1.4523 3.6819 -9E+29 -2.1663 -24.8678 830.2062
std. dev. 0.076 0.2572 1.60E+30 0.1757 0.418 15.6812
ODE average 0.9107 0.4718 -1.2E+24 -3.5000 -14.7206 766.9481
std. dev. 0.0418 0.1056 7.10E+24 0.0000 1.0599 22.6609
DEahcSPX average 2.4177 0.4953 -1.9E+24 -3.3078 -16.9775 9536.837
std. dev. 0.8642 0.1756 1.20E+25 0.3708 2.3975 38054.29
DPSO average 4.8472 2.9157 -2.7E+24 -1.8368 -13.6114 11716.56
std. dev. 0.8796 0.6078 4.10E+12 0.7575 0.965 77.5304
SEPSO average 3.3681 3.3948 -2.1E+25 -2.3102 -14.0837 11007.86
std. dev. 0.7988 0.7648 8.60E+12 0.8684 1.1034 81.4433
EDA average 7.629 5.186 -1.24E+22 -1.222 -10.781 985056.31
std. dev. 0.5443 1.0724 8.90E+22 0.2879 0.7085 769131.6
HisEA average 1.0000 0.1859 -6.25E+34 -3.5000 -28.4301 13.5723
std.dev. 0.0000 0.0351 9.93E+30 0.0000 0.0179 11.8222
TABLE IV: Average and Standard Deviation of the Best Fitness Values.

IV-B1 Test functions

30 well-known real-valued functions are used to evaluate the performance of HisEA in this paper. The test functions, the numbers of dimensions, and the search ranges are as follows.

(18)
(19)
(20)
(21)
(22)
(23)
(24)
(25)
(26)
(27)
(28)
(29)
(30)
(31)
(32)
(33)
(34)
(35)
(36)
(37)
(38)