A Modular Hybridization of Particle Swarm Optimization and Differential Evolution

06/21/2020 ∙ by Rick Boks et al. ∙ Universiteit Leiden ∙ Laboratoire d'Informatique de Paris 6

In swarm intelligence, Particle Swarm Optimization (PSO) and Differential Evolution (DE) have been successfully applied in many optimization tasks, and a large number of variants, in which novel algorithmic operators or components are implemented, have been introduced to boost the empirical performance. In this paper, we first propose to combine the variants of PSO and DE by modularizing each algorithm and incorporating the variants as different options of the corresponding modules. Then, considering the similarity between the inner workings of PSO and DE, we hybridize the two algorithms by creating two populations with the variation operators of PSO and DE, respectively, and selecting individuals from these two populations. The resulting novel hybridization, called PSODE, encompasses most up-to-date variants from both sides and, more importantly, gives rise to an enormous number of unseen swarm algorithms via different instantiations of its modules. In detail, we consider 16 different variation operators originating from existing PSO and DE algorithms, which, combined with 4 different selection operators, allow the hybridization framework to generate 800 novel algorithms. The resulting set of hybrid algorithms, along with the 30 PSO and DE algorithms that can be generated with the considered operators, is tested on the 24 problems from the well-known COCO/BBOB benchmark suite, across multiple function groups and dimensionalities.


1. Introduction

In this paper, we delve into two naturally-inspired algorithms, Particle Swarm Optimization (PSO) (Eberhart and Kennedy, 1995) and Differential Evolution (DE) (Storn and Price, 1995), for solving continuous black-box optimization problems f: \mathbb{R}^D \to \mathbb{R}, which are subject to minimization without loss of generality. Here we only consider simple box constraints on x, meaning the search space is a hyper-box [x_min, x_max]^D.

In the literature, a huge number of variants of PSO and DE have been proposed to enhance the empirical performance of the respective algorithms. Despite the empirical success of those variants, we found that most of them only differ from the original PSO/DE in one or two operators (e.g., the crossover), where usually some simple modifications are implemented. Therefore, it is natural to consider combinations of those variants. Following the so-called configurable CMA-ES approach (van Rijn et al., 2016, 2017), we first modularize both the PSO and DE algorithms, resulting in a modular framework where different types of algorithmic modules are applied sequentially in each generation loop. When incorporating variants into this modular framework (the source code is available at https://github.com/rickboks/pso-de-framework), we first identify the modules at which modifications are made in a particular variant, and then treat the modifications as options of the corresponding modules. For instance, the so-called inertia weight (Shi and Eberhart, 1998), which is a simple modification to the velocity update in PSO, is considered as an option of the velocity update module.

This treatment allows for combining existing variants of either PSO or DE and generating non-existing algorithmic structures. It, in the loose sense, creates a space/family of swarm algorithms that is configurable via instantiating the modules, and hence potentially primes the application of algorithm selection/configuration (Thornton et al., 2013) to swarm intelligence. More importantly, we also propose a meta-algorithm called PSODE that hybridizes the variation operators from both PSO and DE, and therefore gives rise to an even larger space of unseen algorithms. By hybridizing PSO and DE, we aim to unify the strengths of both sides, in an attempt to, for instance, improve the population diversity and the convergence rate. On the well-known Black-Box Optimization Benchmark (BBOB) (Hansen et al., 2016) problem set, we extensively tested all combinations of four different velocity updates (PSO), five neighborhood topologies (PSO), two crossover operators (DE), five mutation operators (DE), and four selection operators, leading up to 800 hybrid algorithms. We benchmark those algorithms on all 24 test functions from the BBOB problem set and analyze the experimental results using the so-called IOHprofiler (Doerr et al., 2019), to identify algorithms that perform well on (a subset of) the 24 test functions.

This paper is organized as follows: Section 2 summarizes the related work. Section 3 reviews the state-of-the-art variants of PSO. Section 4 covers various cutting-edge variants of DE. In Section 5, we describe the novel modular PSODE algorithm. Section 6 specifies the experimental setup on the BBOB problem set. We discuss the experimental results in Section 7 and finally provide, in Section 8, the insights obtained in this paper as well as future directions.

2. Related Work

A hybrid PSO/DE algorithm has been proposed previously (Wen-Jun Zhang and Xiao-Feng Xie, 2003) to improve the population diversity and prevent premature convergence. This is attempted by using the DE mutation, instead of the traditional velocity and position update, to evolve candidate solutions in the PSO algorithm. This mutation is applied to the particle's best-found solution p_i rather than its current position x_i, resulting in a steady-state strategy. Another approach (Hendtlass, 2001) follows the conventional PSO algorithm, but occasionally applies the DE operator in order to escape local minima. Particles maintain their velocity after being perturbed by the DE operator. Other PSO/DE hybrids include a two-phase approach (Pant et al., 2008) and a Bare-Bones PSO variant based on DE (Omran et al., 2007), which requires little parameter tuning.

This work follows the approach of the modular and extensible CMA-ES framework proposed in (van Rijn et al., 2016), where many ES structures can be instantiated by arbitrarily combining existing variations of the CMA-ES. The authors of that work implement a Genetic Algorithm to efficiently evolve the ES structures, instead of performing an expensive brute-force search over all possible combinations of operators.

3. Particle Swarm Optimization

As introduced by Eberhart and Kennedy (Eberhart and Kennedy, 1995), Particle Swarm Optimization (PSO) is an optimization algorithm that mimics the behaviour of a flock of birds foraging for food. Each particle i in a swarm of size S is associated with three vectors: the current position x_i, the velocity v_i, and its previous best position p_i, where i = 1, ..., S. After the initialization of x_i and v_i, where x_i is initialized randomly and v_i is set to 0, the algorithm iteratively updates the velocity v_i of each particle (please see the next subsection) and moves the particle accordingly:

x_i \leftarrow x_i + v_i \qquad (1)

To prevent the velocity from exploding, v_i is kept in the range [-v_max \cdot \mathbf{1}, v_max \cdot \mathbf{1}] (\mathbf{1} is a vector containing all ones). After every position update, the current position is evaluated, f_i = f(x_i). Here, p_i stands for the best solution found by x_i (thus personal best), while g_i is used to track the best solution found in the neighborhood of x_i (thus global best). Typically, the termination of PSO can be determined by simple termination criteria, such as the depletion of the function evaluation budget, as well as more complicated ones that rely on the convergence behavior, e.g., detecting whether the average distance between particles has dropped below a predetermined threshold. The pseudo-code is given in Alg. 1.

1:  for i = 1, ..., S do
2:      v_i ← 0
3:      x_i ~ U([x_min, x_max]^D)                ▷ Initialize
4:  end for
5:  while termination criteria are not met do
6:      for i = 1, ..., S do
7:          f_i ← f(x_i)                          ▷ Evaluate
8:          if f_i < f(p_i) then
9:              p_i ← x_i                         ▷ Update personal best
10:         end if
11:         if f_i < f(g_i) then
12:             g_i ← x_i                         ▷ Update global best
13:         end if
14:         Calculate v_i according to Eq. (2)
15:         x_i ← x_i + v_i                       ▷ Update position
16:     end for
17: end while
Algorithm 1 Original Particle Swarm Optimization
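For concreteness, the loop of Alg. 1 can be sketched in a few lines of Python. This is an illustrative gbest variant under assumed parameter values (e.g., φ1 = φ2 ≈ 1.496 and v_max = 1), not the paper's C++ implementation:

```python
import numpy as np

def pso(f, dim, bounds, swarm_size=20, budget=10_000,
        phi1=1.49618, phi2=1.49618, v_max=1.0, seed=0):
    """Minimal gbest PSO following Alg. 1; an illustrative sketch only."""
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (swarm_size, dim))   # random initial positions
    v = np.zeros((swarm_size, dim))              # velocities initialized to zero
    fx = np.array([f(xi) for xi in x])           # evaluate the initial swarm
    p, fp = x.copy(), fx.copy()                  # personal bests
    g, fg = p[np.argmin(fp)].copy(), fp.min()    # global (neighborhood) best
    evals = swarm_size
    while evals < budget:
        for i in range(swarm_size):
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] += phi1 * r1 * (p[i] - x[i]) + phi2 * r2 * (g - x[i])  # Eq. (2)
            v[i] = np.clip(v[i], -v_max, v_max)  # keep the velocity bounded
            x[i] += v[i]                         # Eq. (1): move the particle
            fi = f(x[i]); evals += 1
            if fi < fp[i]:                       # update personal best
                p[i], fp[i] = x[i].copy(), fi
                if fi < fg:                      # update global best
                    g, fg = x[i].copy(), fi
    return g, fg

# e.g. pso(lambda z: float(np.sum(z**2)), dim=5, bounds=(-5.0, 5.0))
```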

3.1. Velocity Updating Strategies

As proposed in the original paper (Eberhart and Kennedy, 1995), the velocity vector in the original PSO is updated as follows:

v_i \leftarrow v_i + U(0, \varphi_1) \otimes (p_i - x_i) + U(0, \varphi_2) \otimes (g_i - x_i) \qquad (2)

where U(0, \varphi) stands for a continuous uniform random vector with each component distributed uniformly in the range [0, \varphi], and \otimes is component-wise multiplication. Note that, henceforth, parameter settings such as \varphi_1 and \varphi_2 will be specified in the experimentation part (Section 6). As discussed before, velocities resulting from Eq. (2) have to be clamped to the range [-v_max \cdot \mathbf{1}, v_max \cdot \mathbf{1}]. Alternatively, the inertia weight \omega (Shi and Eberhart, 1998) is introduced to moderate the velocity update without using v_max:

v_i \leftarrow \omega v_i + U(0, \varphi_1) \otimes (p_i - x_i) + U(0, \varphi_2) \otimes (g_i - x_i) \qquad (3)

A large value of \omega will result in an exploratory search, while a small value leads to a more exploitative behavior. It is suggested to decrease the inertia weight over time, as it is desirable to scale down the explorative effect gradually. Here, we consider the inertia method with fixed as well as decreasing weights.
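The difference between Eq. (2) and Eq. (3) lies only in how the previous velocity is weighted. A small hedged sketch follows; the helper names and the 0.9 → 0.4 schedule are illustrative assumptions, not necessarily the settings used in this paper:

```python
import numpy as np

def velocity_basic(v, x, p, g, phi1, phi2, rng):
    """Eq. (2): unweighted previous velocity; requires clamping to [-v_max, v_max]."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return v + phi1 * r1 * (p - x) + phi2 * r2 * (g - x)

def velocity_inertia(v, x, p, g, phi1, phi2, w, rng):
    """Eq. (3): the inertia weight w scales the previous velocity."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + phi1 * r1 * (p - x) + phi2 * r2 * (g - x)

def linearly_decreasing_w(evals, budget, w_start=0.9, w_end=0.4):
    """Decreasing-inertia strategy: interpolate w over the evaluation budget."""
    return w_start - (w_start - w_end) * min(evals / budget, 1.0)
```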

Instead of only being influenced by the best neighbor, the velocity of a particle in the Fully Informed Particle Swarm (FIPS) (Mendes et al., 2004) is updated using the best previous positions of all its neighbors. The corresponding equation is:

v_i \leftarrow \chi \left( v_i + \frac{1}{|\mathcal{N}_i|} \sum_{k \in \mathcal{N}_i} U(0, \varphi) \otimes (p_k - x_i) \right) \qquad (4)

where \mathcal{N}_i denotes the set of neighbors of particle i, |\mathcal{N}_i| is its number of neighbors, and \chi is the constriction factor. Finally, the so-called Bare-Bones PSO (Kennedy, 2003) is a completely different approach in the sense that velocities are not used at all; instead, every component j (j = 1, ..., D) of position x_i is sampled from a Gaussian distribution with mean (p_{ij} + g_{ij})/2 and variance |p_{ij} - g_{ij}|, where p_{ij} and g_{ij} are the j-th components of p_i and g_i, respectively:

x_{ij} \sim \mathcal{N}\left( \frac{p_{ij} + g_{ij}}{2},\; |p_{ij} - g_{ij}| \right) \qquad (5)
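A one-line reading of Eq. (5), treating |p_ij − g_ij| as the spread of the Gaussian as in Kennedy (2003); this is a sketch, not the paper's implementation:

```python
import numpy as np

def bare_bones_position(p_i, g_i, rng):
    """Eq. (5): sample each component around the midpoint of the personal and
    neighborhood best; the spread grows with their disagreement."""
    mean = (p_i + g_i) / 2.0
    spread = np.abs(p_i - g_i)
    return rng.normal(mean, spread)

# e.g. bare_bones_position(np.array([1.0, 2.0]), np.array([0.0, 2.0]),
#                          np.random.default_rng(0))
```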

3.2. Population Topologies

Five different topologies from the literature have been implemented in the framework:

  • lbest (local best) (Eberhart and Kennedy, 1995) takes a ring topology and each particle is only influenced by its two adjacent neighbors.

  • gbest (global best) (Eberhart and Kennedy, 1995) uses a fully connected graph and thus every particle is influenced by the best particle of the entire swarm.

  • In the Von Neumann topology (Kennedy and Mendes, 2002), particles are arranged in a two-dimensional array and have four neighbors: the ones horizontally and vertically adjacent to them, with toroidal wrapping (a sketch of this neighborhood indexing is given after this list).

  • The increasing topology (Suganthan, 1999) starts with an lbest topology and gradually increases the connectivity so that, by the end of the run, the particles are fully connected.

  • The dynamic multi-swarm topology (DMS-PSO) (Liang and Suganthan, 2005) creates clusters consisting of three particles each, and randomly creates new clusters after a fixed number of iterations. If the population size is not divisible by three, every cluster has size three, except for one cluster of a different size.
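To make the neighborhood structures concrete, the following sketch computes neighbor indices for the lbest ring and the Von Neumann grid, assuming particles are numbered 0, ..., S−1 and, for the grid, arranged row-major; this indexing is an illustrative assumption, not the framework's internal layout:

```python
def ring_neighbors(i, swarm_size):
    """lbest: the two particles adjacent to i on a ring."""
    return [(i - 1) % swarm_size, (i + 1) % swarm_size]

def von_neumann_neighbors(i, rows, cols):
    """Von Neumann: the four horizontally/vertically adjacent grid cells,
    with toroidal wrapping (row-major numbering)."""
    r, c = divmod(i, cols)
    return [((r - 1) % rows) * cols + c,      # up
            ((r + 1) % rows) * cols + c,      # down
            r * cols + (c - 1) % cols,        # left
            r * cols + (c + 1) % cols]        # right

# e.g. a swarm of 20 particles in a 4 x 5 grid: von_neumann_neighbors(7, 4, 5)
```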

4. Differential Evolution

Differential Evolution (DE) was introduced by Storn and Price in 1995 (Storn and Price, 1995) and uses scaled difference vectors between randomly selected individuals to perturb the population. The pseudo-code of DE is provided in Alg. 3.

After the initialization of the population P = {x_1, ..., x_S} (please see the next subsection; S is again the swarm size), for each individual x_i, a donor vector m_i (a.k.a. mutant) is generated according to:

m_i = x_{r_1} + F \cdot (x_{r_2} - x_{r_3}) \qquad (6)

where three distinct indices r_1, r_2, r_3 \in \{1, ..., S\} \setminus \{i\} are chosen uniformly at random (u.a.r.). Here F is a scalar value called the mutation rate, and x_{r_1} is referred to as the base vector. Afterwards, a trial vector u_i is created by means of crossover.

In the so-called binomial crossover, each component u_{ij} (j = 1, ..., D) of u_i is copied from m_i with a probability Cr (a.k.a. crossover rate), or when j equals an index K chosen u.a.r.:

u_{ij} = \begin{cases} m_{ij} & \text{if } \mathcal{U}(0,1) \leq Cr \text{ or } j = K, \\ x_{ij} & \text{otherwise.} \end{cases} \qquad (7)
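A compact sketch of the DE/rand/1 mutation (Eq. 6) and binomial crossover (Eq. 7); the function names are illustrative, not the framework's API:

```python
import numpy as np

def de_rand_1(pop, i, F, rng):
    """Eq. (6): base vector plus one scaled difference of two other random members."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def binomial_crossover(x_i, donor, Cr, rng):
    """Eq. (7): copy each donor component with probability Cr; component K is
    always taken from the donor so the trial differs from its parent."""
    D = len(x_i)
    K = rng.integers(D)
    take_donor = rng.random(D) <= Cr
    take_donor[K] = True
    return np.where(take_donor, donor, x_i)
```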

In exponential crossover, two integers n and L are chosen. The integer n acts as the starting point where the exchange of components begins, and is chosen uniformly at random. L represents the number of elements that will be inherited from the donor vector, and is chosen using Algorithm 2.

1:  L ← 0
2:  do
3:      L ← L + 1
4:  while \mathcal{U}(0,1) < Cr and L < D
Algorithm 2 Assigning a value to L

The trial vector u_i is generated as:

u_{ij} = \begin{cases} m_{ij} & \text{for } j = \langle n \rangle_D, \langle n+1 \rangle_D, \dots, \langle n+L-1 \rangle_D, \\ x_{ij} & \text{otherwise.} \end{cases} \qquad (8)

The angular brackets \langle \cdot \rangle_D denote the modulo operator with modulus D. Elitism selection is applied between u_i and x_i, where the better one is kept for the next iteration.
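Similarly, exponential crossover (Alg. 2 and Eq. 8) can be sketched as follows; the wrap-around indexing mirrors the modulo notation above, and the helper name is an illustrative assumption:

```python
import numpy as np

def exponential_crossover(x_i, donor, Cr, rng):
    """Copy a contiguous block of L donor components, starting at a random
    index n and wrapping around modulo D (Eq. 8); L is drawn as in Alg. 2."""
    D = len(x_i)
    n = rng.integers(D)              # starting index, chosen u.a.r.
    L = 0
    while True:                      # Alg. 2: keep extending the block
        L += 1
        if not (rng.random() < Cr and L < D):
            break
    trial = x_i.copy()
    for k in range(L):
        trial[(n + k) % D] = donor[(n + k) % D]
    return trial
```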

1:  P ← {x_1, ..., x_S}, sampled u.a.r. in the search space          ▷ Initialize
2:  while termination criteria are not met do
3:      for i = 1, ..., S do
4:          Choose distinct r_1, r_2, r_3 ≠ i u.a.r.
5:          m_i ← x_{r_1} + F · (x_{r_2} − x_{r_3})                   ▷ Mutate
6:          Choose K ∈ {1, ..., D} u.a.r.
7:          for j = 1, ..., D do
8:              if U(0,1) ≤ Cr or j = K then
9:                  u_{ij} ← m_{ij}
10:             else
11:                 u_{ij} ← x_{ij}
12:             end if
13:         end for
14:         if f(u_i) ≤ f(x_i) then
15:             x_i ← u_i                                             ▷ Select
16:         end if
17:     end for
18: end while
Algorithm 3 Differential Evolution using Binomial Crossover

4.1. Mutation

In addition to the so-called DE/rand/1 mutation operator (Eq. 6), we also consider the following variants (a sketch of these operators is given after the list):

  1. DE/best/1 (Storn and Price, 1995): the base vector is chosen as the current best solution x_best in the population:
     m_i = x_{best} + F \cdot (x_{r_1} - x_{r_2})

  2. DE/best/2 (Storn and Price, 1995): two differential vectors calculated using four distinct solutions are scaled and combined with the current best solution:
     m_i = x_{best} + F \cdot (x_{r_1} - x_{r_2}) + F \cdot (x_{r_3} - x_{r_4})

  3. DE/target-to-best/1 (Storn and Price, 1995): the base vector is chosen as the solution x_i on which the mutation will be applied, and the difference from the current best to this solution is used as one of the differential vectors:
     m_i = x_i + F \cdot (x_{best} - x_i) + F \cdot (x_{r_1} - x_{r_2})

  4. DE/target-to-pbest/1 (Jingqiao Zhang and Sanderson, 2007): the same as above, except that instead of the current best we take a solution x_{pbest} that is randomly chosen from the top 100p% solutions in the population, with p ∈ (0, 1]:
     m_i = x_i + F \cdot (x_{pbest} - x_i) + F \cdot (x_{r_1} - x_{r_2})

  5. DE/2-Opt/1 (Chiang et al., 2010): identical to DE/rand/1, except that the better of x_{r_1} and x_{r_2} serves as the base vector:
     m_i = x_{r_1} + F \cdot (x_{r_2} - x_{r_3}) if f(x_{r_1}) < f(x_{r_2}), and m_i = x_{r_2} + F \cdot (x_{r_1} - x_{r_3}) otherwise.
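The variants above differ only in how the base vector and the attractor are chosen. A sketch of several of them, assuming pop is an array of candidate solutions and fit their fitness values (index handling and function names are illustrative):

```python
import numpy as np

def _distinct(rng, pop_size, exclude, k):
    """k distinct indices from {0, ..., pop_size-1}, excluding `exclude`."""
    pool = [j for j in range(pop_size) if j not in exclude]
    return rng.choice(pool, size=k, replace=False)

def de_best_1(pop, fit, i, F, rng):
    r1, r2 = _distinct(rng, len(pop), {i}, 2)
    return pop[np.argmin(fit)] + F * (pop[r1] - pop[r2])

def de_best_2(pop, fit, i, F, rng):
    r1, r2, r3, r4 = _distinct(rng, len(pop), {i}, 4)
    best = pop[np.argmin(fit)]
    return best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])

def de_target_to_best_1(pop, fit, i, F, rng):
    r1, r2 = _distinct(rng, len(pop), {i}, 2)
    return pop[i] + F * (pop[np.argmin(fit)] - pop[i]) + F * (pop[r1] - pop[r2])

def de_target_to_pbest_1(pop, fit, i, F, p, rng):
    """Like target-to-best/1, but the attractor is drawn from the best 100p%."""
    r1, r2 = _distinct(rng, len(pop), {i}, 2)
    top = np.argsort(fit)[:max(1, int(p * len(pop)))]
    pbest = pop[rng.choice(top)]
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - pop[r2])
```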

4.2. Self-Adaptation of Control Parameters

The performance of the DE algorithm is highly dependent on the values of the parameters F and Cr, and the optimal values are in turn dependent on the optimization problem at hand. The self-adaptive DE variant JADE (Jingqiao Zhang and Sanderson, 2007) has been proposed to control these parameters in a self-adaptive manner, without intervention of the user. This self-adaptive parameter scheme is used in both the DE and the hybrid algorithm instances.
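The JADE scheme, roughly as described in (Jingqiao Zhang and Sanderson, 2007), adapts the means from which Cr and F are sampled based on the parameter values that produced improvements in the previous generation. A simplified sketch follows; the constants (c = 0.1, scale 0.1, initial means 0.5) follow the JADE paper and are assumptions rather than the settings verified for this framework:

```python
import numpy as np

class JadeParameters:
    """Sketch of JADE's self-adaptation of Cr and F (illustrative, simplified)."""

    def __init__(self, c=0.1, rng=None):
        self.mu_cr, self.mu_f, self.c = 0.5, 0.5, c
        self.rng = rng if rng is not None else np.random.default_rng()

    def sample(self):
        """Per-individual Cr ~ N(mu_cr, 0.1) clipped to [0, 1], and
        F ~ Cauchy(mu_f, 0.1), redrawn if non-positive and truncated at 1."""
        cr = float(np.clip(self.rng.normal(self.mu_cr, 0.1), 0.0, 1.0))
        f = 0.0
        while f <= 0.0:
            f = self.mu_f + 0.1 * float(self.rng.standard_cauchy())
        return cr, min(f, 1.0)

    def update(self, successful_cr, successful_f):
        """Shift the means toward the parameter values of successful trials;
        F uses a Lehmer mean, which favors larger values."""
        if successful_cr:
            self.mu_cr = (1 - self.c) * self.mu_cr + self.c * float(np.mean(successful_cr))
        if successful_f:
            sf = np.asarray(successful_f, dtype=float)
            self.mu_f = (1 - self.c) * self.mu_f + self.c * float(np.sum(sf**2) / np.sum(sf))
```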

5. Hybridizing PSO with DE

Here, we propose a hybrid algorithm framework called PSODE, which combines the mutation and crossover operators from DE with the velocity and position updates from PSO. This implementation allows combinations of all operators mentioned earlier in a single algorithm, creating the potential for a large number of possible hybrid algorithms. We list the pseudo-code of PSODE in Alg. 4, which works as follows.

  1. The initial population P_1 = {x_1, ..., x_S} (S stands for the swarm size) is sampled uniformly at random in the search space, and the corresponding velocity vectors are initialized to zero (as suggested in (Engelbrecht, 2012)).

  2. After evaluating P_1, we create P_2 by applying the PSO position update to each solution in P_1.

  3. Similarly, P_3 is created by applying the DE mutation to each solution in P_1.

  4. Then, a population P_4 of size S is generated by recombining information among the solutions in P_1 and P_3, based on the DE crossover.

  5. Finally, a new population is generated by selecting good solutions from P_2 and P_4, and possibly P_1 (please see below).

Four different selection methods are considered in this work, two of which are elitist and two non-elitist. A problem arises during the selection procedure: solutions from P_4 have undergone the mutation and crossover of DE, which alter their positions but ignore the velocities thereof, leading to an unmatched pair of positions and velocities. In this case, the velocities that these particles have inherited from P_1 may no longer be meaningful, potentially breaking down the inner workings of PSO in the next iteration. To solve this issue, we propose to re-compute the velocity vector according to the displacement of a particle resulting from the mutation and crossover operators, namely:

v_i \leftarrow u_i - x_i \qquad (9)

where u_i \in P_4 is the trial vector generated from x_i using the aforementioned procedure.

A selection operator is required to select particles from P_1, P_2, and P_4 for the next generation. Note that P_3 is not considered in the selection procedure, as the solution vectors in this population were recombined and stored in P_4. We have implemented four different selection methods: two of those methods only consider population P_2, resulting from the variation operators of PSO, and population P_4, obtained from the variation operators of DE. This type of selection method is essentially non-elitist, allowing for deteriorations. Alternatively, the other two methods implement elitism by additionally taking population P_1 into account.

We use the following naming scheme for the selection methods:

[comparison method]/[number of considered populations]

Using this scheme, we can distinguish the four selection methods: pairwise/2, pairwise/3, union/2, and union/3. The "pairwise" comparison method means that the i-th members (assuming the solutions are indexed) of each considered population are compared to each other, from which we choose the best one for the next generation. The "union" method selects the best S solutions from the union of the considered populations. Here, a "2" signals the inclusion of two populations, P_2 and P_4, and a "3" indicates the further inclusion of P_1. For example, the pairwise/2 method selects the best individual from each pair of solutions in P_2 and P_4, while the union/3 method selects the best S individuals from P_1 ∪ P_2 ∪ P_4.
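The four selection methods can be expressed with a single helper; the (solution, fitness) tuple representation is an illustrative choice, not the framework's data layout:

```python
def select(populations, swarm_size, pairwise=True):
    """`populations` is a list of candidate populations (e.g. [P2, P4] or
    [P2, P4, P1]), each a list of (solution, fitness) tuples of equal length.
    pairwise=True keeps the best of every index-aligned group (pairwise/2, /3);
    pairwise=False keeps the overall best `swarm_size` solutions (union/2, /3)."""
    if pairwise:
        return [min((pop[i] for pop in populations), key=lambda t: t[1])
                for i in range(swarm_size)]
    pooled = [t for pop in populations for t in pop]
    pooled.sort(key=lambda t: t[1])
    return pooled[:swarm_size]

# pairwise/2 (non-elitist):  select([P2, P4], S, pairwise=True)
# union/3   (elitist):       select([P2, P4, P1], S, pairwise=False)
```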

1:  Sample P_1 = {x_1, ..., x_S} uniformly at random in the search space
2:  Initialize velocities v_i ← 0, i = 1, ..., S.
3:  while termination criteria are not met do
4:      P_2 ← ∅
5:      for each x_i ∈ P_1 with its corresponding velocity v_i do
6:          Calculate v_i according to the chosen velocity update strategy (Section 3.1)
7:          x_i' ← x_i + v_i
8:          Evaluate f on x_i'
9:          P_2 ← P_2 ∪ {x_i'}
10:     end for
11:     P_3 ← ∅
12:     for each x_i ∈ P_1 do
13:         Generate the donor vector m_i using the chosen mutation operator (Section 4.1)
14:         P_3 ← P_3 ∪ {m_i}
15:     end for
16:     P_4 ← ∅
17:     for each x_i ∈ P_1 with its donor m_i ∈ P_3 do
18:         u_i ← crossover of x_i and m_i (Eq. (7) or (8))
19:         Calculate v_i for u_i using Eq. (9)
20:         Evaluate f on u_i
21:         P_4 ← P_4 ∪ {u_i}
22:         Update the personal best p_i and neighborhood best g_i
23:     end for
24:     P_1 ← selection from P_2 and P_4 (and P_1, depending on the selection method)
25: end while
Algorithm 4 PSODE
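Putting the pieces together, one concrete PSODE instance (inertia-weight velocity update, gbest topology, DE/rand/1, binomial crossover, pairwise/3 selection) could be sketched as below. Parameter values and the vectorized layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def psode_instance(f, dim, bounds, S=20, budget=10_000,
                   w=0.73, phi=1.5, F=0.5, Cr=0.9, seed=0):
    """Sketch of one PSODE configuration; illustrative only."""
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (S, dim)); V = np.zeros((S, dim))  # P1 and velocities
    fX = np.array([f(x) for x in X]); evals = S
    P, fP = X.copy(), fX.copy()                                # personal bests
    g = P[np.argmin(fP)].copy()                                # global best
    while evals < budget:
        # P2: PSO variation (Eq. (3) followed by Eq. (1))
        r1, r2 = rng.random((S, dim)), rng.random((S, dim))
        V2 = w * V + phi * r1 * (P - X) + phi * r2 * (g - X)
        X2 = X + V2
        fX2 = np.array([f(x) for x in X2]); evals += S
        # P3/P4: DE/rand/1 mutation (Eq. (6)) + binomial crossover (Eq. (7))
        X4 = np.empty_like(X)
        for i in range(S):
            r = rng.choice([j for j in range(S) if j != i], 3, replace=False)
            donor = X[r[0]] + F * (X[r[1]] - X[r[2]])
            mask = rng.random(dim) <= Cr; mask[rng.integers(dim)] = True
            X4[i] = np.where(mask, donor, X[i])
        V4 = X4 - X                                            # Eq. (9)
        fX4 = np.array([f(x) for x in X4]); evals += S
        # pairwise/3 selection over P1 (parents), P2, and P4
        for cand_X, cand_V, cand_f in ((X2, V2, fX2), (X4, V4, fX4)):
            better = cand_f < fX
            X[better], V[better], fX[better] = cand_X[better], cand_V[better], cand_f[better]
        improved = fX < fP
        P[improved], fP[improved] = X[improved], fX[improved]
        g = P[np.argmin(fP)].copy()
    return g, float(fP.min())
```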

6. Experiment

A software framework has been implemented in C++ to generate PSO, DE, and PSODE instances from all aforementioned algorithmic modules, e.g., topologies and mutation strategies. This framework is tested on IOHprofiler, which contains the 24 functions from BBOB/COCO (Hansen et al., 2016), organized in five function groups: 1) separable functions, 2) functions with low or moderate conditioning, 3) unimodal functions with high conditioning, 4) multi-modal functions with adequate global structure, and 5) multi-modal functions with weak global structure.

In the experiments conducted, a PSODE instance is considered as a combination of five modules: velocity update strategy, population topology, mutation method, crossover method, and selection method. Combining each option for each of these five modules, we obtain a total of 4 × 5 × 5 × 2 × 4 = 800 different PSODE instances.

By combining the velocity update strategies and topologies, we obtain 4 × 5 = 20 PSO instances, and similarly, by combining the mutation and crossover operators, we obtain 5 × 2 = 10 DE instances.

Naming Convention of Algorithm Instances

As each PSO, DE, and hybrid instance can be specified by its composing modules, it is named using the abbreviations of these modules. Hybrid instances are named as follows:

H_[velocity strategy]_[topology]_[mutation]

_[crossover]_[selection]

PSO instances are named as:

P_[velocity strategy]_[topology]

And DE instances are named as:

D_[mutation]_[crossover]

Options of all modules are listed in Table 1.

Experiment Setup

The following parameters are used throughout the experiment:

  • Function evaluation budget: .

  • Population (swarm) size: is used for all algorithm instances, due to the relatively consistent performance that instances show across different function groups and dimensionalities when using this value.

  • Hyperparameters in PSO: In Eq. (2) and (3), is taken as recommended in (Clerc and Kennedy, 2002) and for FIPS (Eq. (4)), a setting is adopted from (Mendes et al., 2004). In the fixed inertia strategy, is set to while in the decreasing inertia strategy, is linearly decreased from to . For the Target-to-best/1 mutation scheme, a value of is chosen, following the findings of (Jingqiao Zhang and Sanderson, 2007).

  • Hyperparameters in DE: F and Cr are managed by the JADE self-adaptation scheme.

  • Number of independent runs per function: . Note that only one function instance (instance “1”) is used for each function.

  • Performance measure: expected running time (ERT) (Price, 1997), which is the total number of function evaluations an algorithm is expected to use to reach a given target function value for the first time. ERT is defined as the total number of function evaluations taken across all runs, divided by the number of successful runs; the target might not be reached in every run, and unsuccessful runs still contribute the evaluations they consumed (see the sketch below).
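As a worked example of this definition (the numbers are hypothetical, purely illustrative):

```python
def expected_running_time(evals_per_run, reached_target):
    """ERT = total evaluations spent across all runs / number of successful runs;
    unsuccessful runs still contribute the evaluations they consumed."""
    successes = sum(reached_target)
    return float("inf") if successes == 0 else sum(evals_per_run) / successes

# 5 runs with a budget of 5000 evaluations each, 3 of which hit the target:
# expected_running_time([1200, 5000, 900, 5000, 2300], [True, False, True, False, True])
# -> 14400 / 3 = 4800.0
```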

To present the results, we rank the algorithm instances with regard to their ERT values. This is done by first ranking the instances on the targets of every benchmark function, and then taking the average rank across all targets per function. Finally, the presented rank is obtained by taking the average rank over all test functions. This is done for both dimensionalities. A dataset containing the running time of each independent run and the ERT values of each algorithm instance, along with the supporting scripts, is available at (Boks et al., 2020).

[velocity strategy]
B – Bare-Bones PSO
F – Fully-informed PSO (FIPS)
I – Inertia weight
D – Decreasing inertia weight

[topology]
L – lbest (ring)
G – gbest (fully connected)
N – Von Neumann
I – Increasing connectivity
M – Dynamic multi-swarm

[mutation]
B1 – DE/best/1
B2 – DE/best/2
T1 – DE/target-to-best/1
PB – DE/target-to-pbest/1
O1 – 2-Opt/1

[crossover]
B – Binomial crossover
E – Exponential crossover

[selection]
U2 – Union/2
U3 – Union/3
P2 – Pairwise/2
P3 – Pairwise/3

Table 1. Module options and codings of velocity strategy, topology, mutation, crossover, and selection.

7. Results

Figure 1 depicts the Empirical Cumulative Distribution Functions (ECDFs) of the top-5 ranked algorithm instances in both 5-D and 20-D. Due to overlap, only eight algorithms are shown. Tables 2 and 3 show the Expected Running Times of the 10 highest ranked instances, and the 10 instances ranked in the middle, in 5-D and 20-D, respectively. ERT values are normalized using the corresponding ERT values of the state-of-the-art Covariance Matrix Adaptation Evolution Strategy (CMA-ES).

Though many more PSODE instances were tested, DE instances generally showed the best performance in both 5-D and 20-D. All PSO instances were outperformed by DE and by many PSODE instances. This is not a complete surprise, as several studies (e.g., (Vesterstrom and Thomsen, 2004; Iwan et al., 2012)) demonstrated the relative superiority of DE over PSO.

Looking at the ranked algorithm instances, it is clear that some modules are more successful than others. The (decreasing) inertia weight velocity update strategies are dominant among the top-performing algorithms, as are pairwise/3 selection and binomial crossover. Target-to-pbest/1 mutation is most successful in 5-D, while target-to-best/1 seems a better choice in 20-D. This is surprising, as one may expect the less greedy target-to-pbest/1 mutation to be more beneficial in higher-dimensional search spaces, where it is increasingly difficult to avoid getting stuck in local optima. The best choice of selection method is convincingly pairwise/3. This seems to be one of the most crucial modules for the PSODE algorithm, as most instances with any other selection method show considerably worse performance. This seemingly high importance of an elitist strategy suggests that the algorithm's convergence with non-elitist selection is too slow, which could be due to the application of two different search strategies. The instances H_I_*_PB_B_P3 and H_I_*_T1_B_P3 appear to be the most competitive PSODE instances, with the topology choice having little influence on the observed performance. The most highly ranked DE instances are D_T1_B and D_PB_B, in both dimensionalities. Binomial crossover seems superior to its exponential counterpart, especially in 20 dimensions.

Interestingly, the PSODE and PSO algorithms “prefer” different module options. As an example, the Fully Informed Particle Swarm works well on PSO instances, but PSODE instances perform better with the (decreasing) inertia weight. Bare-Bones PSO showed the overall poorest performance of the four velocity update strategies.

Notable is the large performance difference between the worst and best generated algorithm instances. Some combinations of modules, as is to be expected when arbitrarily combining operators, show very poor performance, failing to solve even the most trivial problems. This stresses the importance of proper module selection.

8. Conclusion and Future Work

We implement an extensible and modular hybridization of PSO and DE, called PSODE, in which a large number of variants from both PSO and DE are incorporated as module options. Interestingly, a vast number of unseen swarm algorithms can be easily instantiated from this hybridization, paving the way for designing and selecting appropriate swarm algorithms for specific optimization tasks. In this work, we investigate, on the 24 benchmark functions from BBOB, 20 PSO variants, 10 DE variants, and 800 PSODE instances resulting from combining the variants of PSO and DE, where we identify some promising hybrid algorithms that surpass PSO but fail to outperform the best DE variants on subsets of the BBOB problems. Moreover, we obtained insights into suitable combinations of algorithmic modules. Specifically, the efficacy of the target-to-(p)best mutation operators, the (decreasing) inertia weight velocity update strategies, and binomial crossover was demonstrated. On the other hand, some inefficient operators, such as Bare-Bones PSO, were identified. The neighborhood topology appeared to have the least effect on the observed performance of the hybrid algorithm.

Future work lies in extending the hybridization framework. Firstly, we plan to incorporate as many state-of-the-art PSO and DE variants as possible. Secondly, we shall explore alternative ways of combining PSO and DE. Lastly, it is worthwhile to consider the problem of selecting a suitable hybrid algorithm for an unseen optimization problem, taking the approach of automated algorithm selection.

Figure 1. Empirical Cumulative Distribution Functions (ECDFs) of the top-5 ranked algorithms in both 5-D and 20-D for each function group defined in BBOB (Hansen et al., 2016). ECDFs are aggregated over target values, and the ranking is in accordance with Tables 2 and 3. Note that only eight algorithms appear here, since two algorithms are simultaneously among the top five in both 5-D and 20-D.
rank Algorithm Instance F1 F2 F6 F8 F11 F12 F17 F18 F21
– CMA-ES 658.933 2138.400 1653.667 2834.714 2207.400 5456.867 9248.600 13745.867 74140.538
1 D_T1_B 2.472 1.175 2.261 3.177 1.640 2.362 1.907 9.397 0.592
2 D_PB_B 2.546 1.213 2.321 4.031 1.643 2.580 1.258 5.324 1.072
3 D_PB_E 3.176 1.483 3.635 5.152 1.700 2.750 1.584 4.350 0.305
4 D_T1_E 3.060 1.477 3.583 3.670 1.660 2.281 2.036 9.112 0.352
5 D_O1_B 3.152 1.466 3.717 4.155 6.360 8.818 1.445 8.405 0.383
6 H_I_I_PB_E_P3 3.911 1.830 3.817 3.724 2.951 3.055 3.301 3.021 0.519
7 H_I_I_PB_B_P3 3.685 1.694 3.117 3.115 2.912 3.047 2.102 3.222 1.063
8 H_I_G_PB_B_P3 3.138 1.473 2.813 5.656 2.968 3.099 4.684 3.507 2.251
9 H_I_I_T1_B_P3 3.599 1.700 3.155 5.106 2.837 2.670 2.914 3.975 0.727
10 H_I_N_PB_B_P3 3.480 1.650 3.100 5.061 2.852 2.932 2.453 3.213 1.064
411 H_I_N_PB_B_P2 4.761 2.268 4.744 12.933 3.113 53.561 2.738
412 H_D_N_T1_E_U3 29.656 38.499 22.459 25.214 5.091 9.053 22.333 8.645 1.247
413 H_B_L_B2_E_U3 25.515 13.345 91.998 10.758 4.203 5.516 16.277 0.960
414 H_F_L_O1_E_U3 19.585 9.980 94.563 18.771 5.529 12.265 161.662 7.416 3.586
415 H_B_G_T1_E_P3 4.736 2.288 6.532 10.503 45.093 2.808 36.108 3.474
416 H_B_N_B1_B_U2 6.531 3.029 8.313 6.918 93.749 13.117 28.817 19.629
417 H_D_I_T1_E_P2 5.506 2.545 5.917 12.812 7.791 34.691 3.433
418 H_D_M_O1_E_U3 21.270 10.963 33.571 12.992 5.882 7.250 12.577 5.760 1.192
419 H_B_G_O1_B_P2 4.091 1.764 4.959 157.845 2.253
420 H_F_L_T1_E_U3 26.450 15.383 17.706 12.174 4.609 9.334 16.892 53.541 1.822
Table 2. On the 5-dimensional problems, the normalized Expected Running Time (ERT) values of the top-10 ranked algorithms and 10 algorithms ranked in the middle among all 830 algorithms. The ranking is first determined on each test problem with respect to ERT and then averaged over all test problems. The reported ERT values are computed for a fixed target value. All ERT values are normalized per problem with respect to a reference CMA-ES, shown in the first row of the table.
rank Algorithm Instance F1 F2 F6 F8 F11 F12 F17 F18 F21
– CMA-ES 830.800 16498.533 4018.600 19140.467 12212.267 15316.733 5846.400 17472.333 801759
1 D_T1_B 7.377 0.864 5.912 3.702 2.678 4.699 3.144 3.604 0.385
2 D_PB_B 7.731 0.901 6.884 6.766 3.833 5.999 3.158 1.719 0.193
3 H_I_I_T1_B_P3 10.988 1.195 7.894 4.153 6.596 7.656 3.988 3.081 0.298
4 H_I_M_T1_B_P3 12.621 1.434 9.714 5.296 8.389 8.152 4.979 3.138 0.186
5 H_I_L_T1_B_P3 11.402 1.299 9.271 5.146 8.170 7.422 4.771 3.406 0.341
6 H_I_N_T1_B_P3 10.641 1.202 8.218 4.705 7.253 7.928 4.325 2.741 0.338
7 H_D_M_T1_B_P3 12.865 1.476 10.100 6.036 8.119 8.768 5.345 3.450 0.354
8 D_B2_B 7.983 0.885 5.862 10.401 6.455 9.258 44.240 0.829
9 H_D_G_T1_B_P3 9.031 1.074 7.910 4.419 5.690 8.078 4.079 7.838 0.695
10 H_D_N_T1_B_P3 11.307 1.287 9.057 4.801 9.854 5.949 4.517 4.288 0.303
411 H_D_L_T1_B_U2 39.225 6.262 312.925 35.178 0.728
412 H_F_M_T1_B_U2 55.045 5.655 34.213 0.360
413 H_B_M_T1_E_P2 39.181 4.393 41.771 0.369
414 P_F_N 53.733 1480.838 88.421 0.163
415 H_I_M_T1_B_U2 40.014 7.379 313.468 35.252 0.546
416 H_I_N_PB_B_U3 70.776 362.611 86.426 18.979 339.045 113.442 0.433
417 H_I_M_B1_E_P2 33.073 3.734 72.629 35.327 0.876
418 H_I_G_B2_B_U2 43.424 8.498 104.122 7.367
419 H_B_G_PB_B_U2 41.308 16.007 50.786 1.054
420 H_B_N_PB_B_P3 33.984 4.203 32.929 1.314
Table 3. On the 20-dimensional problems, the normalized Expected Running Time (ERT) values of the top-10 ranked algorithms and 10 algorithms ranked in the middle among all 830 algorithms. The ranking is first determined on each test problem with respect to ERT and then averaged over all test problems. The reported ERT values are computed for a fixed target value. All ERT values are normalized per problem with respect to a reference CMA-ES, shown in the first row of the table.

Acknowledgments

Hao Wang acknowledges the support from the Paris Île-de-France Region.

References

  • R. Boks, H. Wang, and T. Bäck (2020) Cited by: §6.
  • C. Chiang, W. Lee, and J. Heh (2010) A 2-opt based differential evolution for global optimization. Applied Soft Computing 10 (4), pp. 1200 – 1207. Note: Optimisation Methods & Applications in Decision-Making Processes External Links: ISSN 1568-4946, Document, Link Cited by: item 5.
  • M. Clerc and J. Kennedy (2002) The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation 6 (1), pp. 58–73. External Links: Document, ISSN 1089-778X Cited by: 3rd item.
  • C. Doerr, F. Ye, N. Horesh, H. Wang, O. M. Shir, and T. Bäck (2019) Benchmarking discrete optimization heuristics with IOHprofiler. Applied Soft Computing, pp. 106027. Cited by: §1.
  • R. Eberhart and J. Kennedy (1995) A new optimizer using particle swarm theory. Proceedings of the sixth international symposium on micro machine and human science, pp. 39––43. Cited by: §1, 1st item, 2nd item, §3.1, §3.
  • A. Engelbrecht (2012) Particle swarm optimization: velocity initialization. In 2012 IEEE Congress on Evolutionary Computation, Vol. , pp. 1–8. External Links: Document, ISSN 1941-0026 Cited by: item 1.
  • N. Hansen, A. Auger, O. Mersmann, T. Tušar, and D. Brockhoff (2016) COCO: a platform for comparing continuous optimizers in a black-box setting. ArXiv e-prints arXiv:1603.08785. Cited by: §1, §6, Figure 1.
  • T. Hendtlass (2001) A combined swarm differential evolution algorithm for optimization problems. In Proceedings of the 14th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems: Engineering of Intelligent Systems, IEA/AIE '01, Berlin, Heidelberg, pp. 11–18. External Links: ISBN 3540422196 Cited by: §2.
  • M. Iwan, R. Akmeliawati, T. Faisal, and H. M.A.A. Al-Assadi (2012) Performance comparison of differential evolution and particle swarm optimization in constrained optimization. Procedia Engineering 41, pp. 1323 – 1328. Note: International Symposium on Robotics and Intelligent Sensors 2012 (IRIS 2012) External Links: ISSN 1877-7058, Document, Link Cited by: §7.
  • Jingqiao Zhang and A. C. Sanderson (2007) JADE: self-adaptive differential evolution with fast and reliable convergence performance. In 2007 IEEE Congress on Evolutionary Computation, Vol. , pp. 2251–2258. External Links: Document, ISSN Cited by: item 4, §4.2, 3rd item.
  • J. Kennedy and R. Mendes (2002) Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No.02TH8600), Vol. 2, pp. 1671–1676 vol.2. External Links: Document, ISSN Cited by: 3rd item.
  • J. Kennedy (2003) Bare bones particle swarms. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS’03 (Cat. No.03EX706), Vol. , pp. 80–87. External Links: Document, ISSN null Cited by: §3.1.
  • J. J. Liang and P. N. Suganthan (2005) Dynamic multi-swarm particle swarm optimizer. In Proceedings 2005 IEEE Swarm Intelligence Symposium, 2005. SIS 2005., Vol. , pp. 124–129. External Links: Document, ISSN Cited by: 5th item.
  • R. Mendes, J. Kennedy, and J. Neves (2004) The fully informed particle swarm: simpler, maybe better. IEEE Transactions on Evolutionary Computation 8 (3), pp. 204–210. External Links: Document, ISSN 1941-0026 Cited by: §3.1, 3rd item.
  • M. G. H. Omran, A. P. Engelbrecht, and A. Salman (2007) Differential evolution based particle swarm optimization. In 2007 IEEE Swarm Intelligence Symposium, Vol. , pp. 112–119. External Links: Document, ISSN null Cited by: §2.
  • M. Pant, R. Thangaraj, C. Grosan, and A. Abraham (2008) Hybrid differential evolution - particle swarm optimization algorithm for solving global optimization problems. In 2008 Third International Conference on Digital Information Management, Vol. , pp. 18–24. External Links: Document, ISSN null Cited by: §2.
  • K. V. Price (1997) Differential evolution vs. the functions of the 2/sup nd/ iceo. In Proceedings of 1997 IEEE International Conference on Evolutionary Computation (ICEC ’97), Vol. , pp. 153–157. External Links: Document, ISSN null Cited by: 6th item.
  • Y. Shi and R. Eberhart (1998) A modified particle swarm optimizer. In 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), Vol. , pp. 69–73. External Links: Document, ISSN Cited by: §1, §3.1.
  • R. Storn and K. Price (1995) Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces. Journal of Global Optimization 23, pp. . Cited by: §1, item 1, item 2, item 3, §4.
  • P. N. Suganthan (1999) Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Vol. 3, pp. 1958–1962 Vol. 3. External Links: Document, ISSN Cited by: 4th item.
  • C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown (2013) Auto-weka: combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, New York, NY, USA, pp. 847–855. External Links: ISBN 9781450321747, Link, Document Cited by: §1.
  • S. van Rijn, H. Wang, M. van Leeuwen, and T. Bäck (2016) Evolving the structure of evolution strategies. In 2016 IEEE Symposium Series on Computational Intelligence (SSCI), Vol. , pp. 1–8. External Links: Document, ISSN Cited by: §1, §2.
  • S. van Rijn, H. Wang, B. van Stein, and T. Bäck (2017) Algorithm configuration data mining for cma evolution strategies. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO ’17, New York, NY, USA, pp. 737–744. External Links: ISBN 978-1-4503-4920-8, Link, Document Cited by: §1.
  • J. Vesterstrom and R. Thomsen (2004) A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems. In Proceedings of the 2004 Congress on Evolutionary Computation (IEEE Cat. No.04TH8753), Vol. 2, pp. 1980–1987. External Links: Document Cited by: §7.
  • Wen-Jun Zhang and Xiao-Feng Xie (2003) DEPSO: hybrid particle swarm with differential evolution operator. In SMC’03 Conference Proceedings. 2003 IEEE International Conference on Systems, Man and Cybernetics. Conference Theme - System Security and Assurance (Cat. No.03CH37483), Vol. 4, pp. 3816–3821 vol.4. External Links: Document, ISSN 1062-922X Cited by: §2.