1. Introduction
In this paper, we investigate two naturally-inspired algorithms, Particle Swarm Optimization (PSO) (Eberhart and Kennedy, 1995) and Differential Evolution (DE) (Storn and Price, 1995), for solving continuous black-box optimization problems $f\colon \mathbb{R}^d \rightarrow \mathbb{R}$, which we take as subject to minimization without loss of generality. Here we only consider simple box constraints on $\vec{x}$, meaning the search space is a hyperbox $[\vec{x}_{\min}, \vec{x}_{\max}] \subseteq \mathbb{R}^d$.
In the literature, a huge number of variants of PSO and DE have been proposed to enhance the empirical performance of the respective algorithms. Despite the empirical success of those variants, we found that most of them differ from the original PSO/DE in only one or two operators (e.g., the crossover), where usually some simple modifications are implemented. Therefore, it is natural to consider combinations of those variants. Following the so-called configurable CMA-ES approach (van Rijn et al., 2016, 2017), we first modularize both the PSO and DE algorithms, resulting in a modular framework in which different types of algorithmic modules are applied sequentially in each generation loop. When incorporating variants into this modular framework (the source code is available at https://github.com/rickboks/psodeframework), we first identify the modules at which modifications are made in a particular variant, and then treat the modifications as options of the corresponding modules. For instance, the so-called inertia weight (Shi and Eberhart, 1998), which is a simple modification to the velocity update in PSO, is considered as an option of the velocity update module.
This treatment allows for combining existing variants of either PSO or DE and generating non-existing algorithmic structures. In the loose sense, it creates a space/family of swarm algorithms, which is configurable via instantiating the modules, and hence potentially primes the application of algorithm selection/configuration (Thornton et al., 2013) to swarm intelligence. More importantly, we also propose a meta-algorithm called PSODE that hybridizes the variation operators from both PSO and DE, and therefore gives rise to an even larger space of unseen algorithms. By hybridizing PSO and DE, we aim to unify the strengths of both sides, in an attempt to, for instance, improve the population diversity and the convergence rate. On the well-known Black-Box Optimization Benchmark (BBOB) (Hansen et al., 2016) problem set, we extensively test all combinations of four different velocity updates (PSO), five neighborhood topologies (PSO), two crossover operators (DE), five mutation operators (DE), and four selection operators, leading to $4 \times 5 \times 2 \times 5 \times 4 = 800$ algorithms. We benchmark those algorithms on all 24 test functions from the BBOB problem set and analyze the experimental results using the so-called IOHprofiler (Doerr et al., 2019), to identify algorithms that perform well on (a subset of) the 24 test functions.
This paper is organized as follows: Section 2 summarizes the related work. Section 3 reviews the state-of-the-art variants of PSO. Section 4 covers various cutting-edge variants of DE. In Section 5, we describe the novel modular PSODE algorithm. Section 6 specifies the experimental setup on the BBOB problem set. We discuss the experimental results in Section 7 and finally provide, in Section 8, the insights obtained in this paper as well as future directions.
2. Related Work
A hybrid PSO/DE algorithm has been proposed previously (Wen-Jun Zhang and Xiao-Feng Xie, 2003) to improve the population diversity and prevent premature convergence. This is attempted by using the DE mutation, instead of the traditional velocity and position update, to evolve candidate solutions in the PSO algorithm. This mutation is applied to the particle's best-found solution $\vec{p}_i$ rather than its current position $\vec{x}_i$, resulting in a steady-state strategy. Another approach (Hendtlass, 2001) follows the conventional PSO algorithm, but occasionally applies the DE operator in order to escape local minima. Particles maintain their velocity after being perturbed by the DE operator. Other PSO/DE hybrids include a two-phase approach (Pant et al., 2008) and a Bare-Bones PSO variant based on DE (Omran et al., 2007), which requires little parameter tuning.
This work follows the approach of the modular and extensible CMA-ES framework proposed in (van Rijn et al., 2016), where many ES structures can be instantiated by arbitrarily combining existing variations of the CMA-ES. The authors of that work implement a Genetic Algorithm to efficiently evolve the ES structures, instead of performing an expensive brute-force search over all possible combinations of operators.
3. Particle Swarm Optimization
As introduced by Eberhart and Kennedy (Eberhart and Kennedy, 1995), Particle Swarm Optimization (PSO) is an optimization algorithm that mimics the behaviour of a flock of birds foraging for food. A particle $i$ in a swarm of size $M$ is associated with three vectors: the current position $\vec{x}_i$, the velocity $\vec{v}_i$, and its previous best position $\vec{p}_i$, where $i \in \{1, \dots, M\}$. After the initialization of $\vec{x}_i$ and $\vec{v}_i$, where $\vec{x}_i$ is initialized randomly and $\vec{v}_i$ is set to $\vec{0}$, the algorithm iteratively updates the velocity of each particle (please see the next subsection) and moves the particle accordingly:
(1) $\vec{x}_i \leftarrow \vec{x}_i + \vec{v}_i$
To prevent the velocity from exploding, $\vec{v}_i$ is kept in the range $[-v_{\max}\vec{1}, v_{\max}\vec{1}]$ ($\vec{1}$ is a vector containing all ones). After every position update, the current position is evaluated, i.e., $f(\vec{x}_i)$ is computed, and $\vec{p}_i$ and $\vec{g}_i$ are updated accordingly. Here, $\vec{p}_i$ stands for the best solution found by particle $i$ (thus personal best), while $\vec{g}_i$ is used to track the best solution found in the neighborhood of particle $i$ (thus global best). Typically, the termination of PSO is determined by simple criteria, such as the depletion of the function evaluation budget, or by more sophisticated ones that rely on the convergence behavior, e.g., detecting whether the average distance between particles has dropped below a predetermined threshold. The pseudocode is given in Alg. 1.
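To make the iteration loop concrete, the following minimal Python sketch performs one generation of the basic (gbest) PSO described above. It is an illustration only, not the implementation used in this paper; the function and parameter names are ours.

```python
import numpy as np

def pso_step(x, v, pbest, pbest_f, func, phi1=2.0, phi2=2.0, vmax=1.0, rng=None):
    """One gbest-PSO iteration: velocity update (Eq. 2), clamping, position update (Eq. 1)."""
    rng = np.random.default_rng() if rng is None else rng
    g = pbest[np.argmin(pbest_f)]                        # neighborhood best (gbest topology)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)    # U(0,1) per component
    v = v + phi1 * r1 * (pbest - x) + phi2 * r2 * (g - x)
    v = np.clip(v, -vmax, vmax)                          # keep v in [-vmax*1, vmax*1]
    x = x + v
    fx = np.array([func(xi) for xi in x])
    improved = fx < pbest_f
    pbest = np.where(improved[:, None], x, pbest)        # update personal bests
    pbest_f = np.where(improved, fx, pbest_f)
    return x, v, pbest, pbest_f
```

By construction, the best personal-best value can only improve or stay the same over iterations, which is a useful sanity check when experimenting with the update rules.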
3.1. Velocity Updating Strategies
As proposed in the original paper (Eberhart and Kennedy, 1995), the velocity vector in the original PSO is updated as follows:
(2) $\vec{v}_i \leftarrow \vec{v}_i + \phi_1 \vec{r}_1 \otimes (\vec{p}_i - \vec{x}_i) + \phi_2 \vec{r}_2 \otimes (\vec{g}_i - \vec{x}_i)$
where $\vec{r}_1$ and $\vec{r}_2$ stand for continuous uniform random vectors with each component distributed uniformly in the range $[0, 1]$, and $\otimes$ is component-wise multiplication. Note that, henceforth, parameter settings such as $\phi_1$ and $\phi_2$ will be specified in the experimentation part (Section 6). As discussed before, velocities resulting from Eq. (2) have to be clamped to the range $[-v_{\max}\vec{1}, v_{\max}\vec{1}]$. Alternatively, the inertia weight $w$ (Shi and Eberhart, 1998) was introduced to moderate the velocity update without using $v_{\max}$:
(3) $\vec{v}_i \leftarrow w\vec{v}_i + \phi_1 \vec{r}_1 \otimes (\vec{p}_i - \vec{x}_i) + \phi_2 \vec{r}_2 \otimes (\vec{g}_i - \vec{x}_i)$
A large value of $w$ results in an exploratory search, while a small value leads to more exploitative behavior. It has been suggested to decrease the inertia weight over time, as it is desirable to scale down the explorative effect gradually. Here, we consider the inertia method with fixed as well as decreasing weights.
Instead of only being influenced by the best neighbor, the velocity of a particle in the Fully Informed Particle Swarm (FIPS) (Mendes et al., 2004) is updated using the best previous positions of all its neighbors. The corresponding equation is:
(4) $\vec{v}_i \leftarrow \chi \left( \vec{v}_i + \sum_{k=1}^{|\mathcal{N}_i|} \frac{\phi\, \vec{r}_k \otimes (\vec{p}_k - \vec{x}_i)}{|\mathcal{N}_i|} \right)$
where $|\mathcal{N}_i|$ is the number of neighbors of particle $i$, $\vec{p}_k$ is the previous best position of the $k$-th neighbor, and $\chi$ is the constriction factor (Clerc and Kennedy, 2002). Finally, the so-called Bare-Bones PSO (Kennedy, 2003) is a completely different approach in the sense that velocities are not used at all; instead, every component $x_{ij}$ ($j \in \{1, \dots, d\}$) of position $\vec{x}_i$ is sampled from a Gaussian distribution with mean $(p_{ij} + g_{ij})/2$ and standard deviation $|p_{ij} - g_{ij}|$, where $p_{ij}$ and $g_{ij}$ are the $j$-th components of $\vec{p}_i$ and $\vec{g}_i$, respectively:
(5) $x_{ij} \sim \mathcal{N}\!\left( \frac{p_{ij} + g_{ij}}{2},\; |p_{ij} - g_{ij}| \right)$
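The Bare-Bones sampling rule (Eq. 5) can be sketched in a few lines; the sketch below assumes the common formulation in which $|p_{ij} - g_{ij}|$ is used as the per-component standard deviation, and the function name is ours.

```python
import numpy as np

def bare_bones_positions(pbest, g, rng=None):
    """Bare-Bones PSO (Eq. 5): sample each component of the new positions from a
    Gaussian centered midway between the personal best and the neighborhood best."""
    rng = np.random.default_rng() if rng is None else rng
    mean = (pbest + g) / 2.0        # component-wise midpoint (p_ij + g_ij) / 2
    spread = np.abs(pbest - g)      # component-wise deviation |p_ij - g_ij|
    return rng.normal(mean, spread)
```

Note that once a particle's personal best coincides with the neighborhood best, the spread collapses to zero and the particle resamples its own best position, which explains the stagnation behavior sometimes reported for Bare-Bones PSO.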
3.2. Population Topologies
Five different topologies from the literature have been implemented in the framework:

lbest (local best) (Eberhart and Kennedy, 1995) uses a ring topology, in which each particle is only influenced by its two adjacent neighbors.

gbest (global best) (Eberhart and Kennedy, 1995) uses a fully connected graph and thus every particle is influenced by the best particle of the entire swarm.

In the Von Neumann topology (Kennedy and Mendes, 2002), particles are arranged in a twodimensional array and have four neighbors: the ones horizontally and vertically adjacent to them, with toroidal wrapping.

The increasing topology (Suganthan, 1999) starts with an lbest topology and gradually increases the connectivity so that, by the end of the run, the particles are fully connected.

The dynamic multi-swarm topology (DMS-PSO) (Liang and Suganthan, 2005) partitions the swarm into clusters of three particles each, and randomly regroups the clusters after a fixed number of iterations. If the population size is not divisible by three, every cluster has size three except one, which holds the remaining particles.
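The static topologies above reduce to simple index arithmetic. The following sketch (our own helper names, illustrative only) computes the neighbor sets for the lbest ring and for the Von Neumann grid with toroidal wrapping:

```python
def ring_neighbors(i, n):
    """lbest: ring topology, each particle has its two adjacent neighbors."""
    return [(i - 1) % n, (i + 1) % n]

def von_neumann_neighbors(i, rows, cols):
    """Von Neumann: particles on a rows x cols toroidal grid, 4 neighbors each."""
    r, c = divmod(i, cols)
    return [((r - 1) % rows) * cols + c,   # up
            ((r + 1) % rows) * cols + c,   # down
            r * cols + (c - 1) % cols,     # left
            r * cols + (c + 1) % cols]     # right
```

The gbest topology needs no such helper (every particle neighbors all others), and the increasing and dynamic multi-swarm topologies can be built on top of these primitives by regenerating the neighbor sets over time.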
4. Differential Evolution
Differential Evolution (DE) was introduced by Storn and Price in 1995 (Storn and Price, 1995); it perturbs the population using scaled difference vectors between randomly selected individuals. The pseudocode of DE is provided in Alg. 3.
After the initialization of the population $\{\vec{x}_1, \dots, \vec{x}_M\}$ ($M$ is again the swarm size), for each individual $\vec{x}_i$, a donor vector $\vec{v}_i$ (a.k.a. mutant) is generated according to:
(6) $\vec{v}_i = \vec{x}_{r_1} + F\,(\vec{x}_{r_2} - \vec{x}_{r_3})$
where three distinct indices $r_1, r_2, r_3 \neq i$ are chosen uniformly at random (u.a.r.). Here, $F$ is a scalar value called the mutation rate, and $\vec{x}_{r_1}$ is referred to as the base vector. Afterwards, a trial vector $\vec{u}_i$ is created by means of crossover.
In the so-called binomial crossover, each component $u_{ij}$ ($j \in \{1, \dots, d\}$) of $\vec{u}_i$ is copied from $\vec{v}_i$ with a probability $CR$ (a.k.a. the crossover rate), or when $j$ equals an index $j_{\text{rand}}$ chosen u.a.r.:
(7) $u_{ij} = \begin{cases} v_{ij} & \text{if } \mathcal{U}(0, 1) \le CR \text{ or } j = j_{\text{rand}}, \\ x_{ij} & \text{otherwise.} \end{cases}$
In exponential crossover, two integers $n, L \in \{1, \dots, d\}$ are chosen. The integer $n$ acts as the starting point where the exchange of components begins and is chosen uniformly at random, while $L$ represents the number of components that will be inherited from the donor vector and is chosen using Algorithm 2.
The trial vector is generated as:
(8) $u_{ij} = \begin{cases} v_{ij} & \text{for } j = \langle n \rangle_d, \langle n+1 \rangle_d, \dots, \langle n+L-1 \rangle_d, \\ x_{ij} & \text{otherwise,} \end{cases}$
where the angular brackets $\langle \cdot \rangle_d$ denote the modulo operator with modulus $d$. Elitist selection is applied between $\vec{x}_i$ and $\vec{u}_i$, where the better one is kept for the next iteration.
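A full DE/rand/1 generation, with either crossover variant, can be sketched as follows. This is an illustrative Python rendering of Eqs. (6)-(8) under our own naming, not the framework's C++ code.

```python
import numpy as np

def binomial_crossover(target, donor, cr, rng):
    """Eq. (7): take donor components with probability CR; j_rand guarantees at least one."""
    d = len(target)
    mask = rng.random(d) < cr
    mask[rng.integers(d)] = True
    return np.where(mask, donor, target)

def exponential_crossover(target, donor, cr, rng):
    """Eq. (8): copy a contiguous (modulo d) run of L donor components starting at n."""
    d = len(target)
    n, L = rng.integers(d), 1
    while L < d and rng.random() < cr:   # run length grows geometrically (cf. Alg. 2)
        L += 1
    trial = target.copy()
    idx = [(n + k) % d for k in range(L)]
    trial[idx] = donor[idx]
    return trial

def de_generation(pop, fit, func, F=0.5, cr=0.9, crossover=binomial_crossover, rng=None):
    """One DE/rand/1 generation: mutation (Eq. 6), crossover, 1-to-1 elitist selection."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        donor = pop[r1] + F * (pop[r2] - pop[r3])          # mutant v_i
        trial = crossover(pop[i], donor, cr, rng)
        f_trial = func(trial)
        if f_trial <= fit[i]:                              # keep the better of x_i and u_i
            pop[i], fit[i] = trial, f_trial
    return pop, fit
```

With $CR = 0$, binomial crossover still copies the $j_{\text{rand}}$-th donor component and exponential crossover copies exactly one component, so every trial vector differs from its target unless the donor coincides with it.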
4.1. Mutation
In addition to the so-called DE/rand/1 mutation operator (Eq. 6), we also consider the following variants:

DE/best/1 (Storn and Price, 1995): the base vector is chosen as the current best solution $\vec{x}_{\text{best}}$ in the population: $\vec{v}_i = \vec{x}_{\text{best}} + F\,(\vec{x}_{r_1} - \vec{x}_{r_2})$

DE/best/2 (Storn and Price, 1995): two difference vectors, calculated using four distinct solutions, are scaled and combined with the current best solution: $\vec{v}_i = \vec{x}_{\text{best}} + F\,(\vec{x}_{r_1} - \vec{x}_{r_2}) + F\,(\vec{x}_{r_3} - \vec{x}_{r_4})$

DE/target-to-best/1 (Storn and Price, 1995): the base vector is the solution on which the mutation is applied, and the difference from the current best to this solution is used as one of the difference vectors: $\vec{v}_i = \vec{x}_i + F\,(\vec{x}_{\text{best}} - \vec{x}_i) + F\,(\vec{x}_{r_1} - \vec{x}_{r_2})$

DE/target-to-pbest/1 (Jingqiao Zhang and Sanderson, 2007): the same as above, except that instead of the current best we take a solution $\vec{x}_{\text{pbest}}$ chosen randomly from the top $100p\%$ solutions in the population, with $p \in (0, 1]$: $\vec{v}_i = \vec{x}_i + F\,(\vec{x}_{\text{pbest}} - \vec{x}_i) + F\,(\vec{x}_{r_1} - \vec{x}_{r_2})$

DE/2-Opt/1 (Chiang et al., 2010): identical to DE/rand/1, except that the better of $\vec{x}_{r_1}$ and $\vec{x}_{r_2}$ serves as the base vector: $\vec{v}_i = \begin{cases} \vec{x}_{r_1} + F\,(\vec{x}_{r_2} - \vec{x}_{r_3}) & \text{if } f(\vec{x}_{r_1}) < f(\vec{x}_{r_2}), \\ \vec{x}_{r_2} + F\,(\vec{x}_{r_1} - \vec{x}_{r_3}) & \text{otherwise.} \end{cases}$
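The mutation variants above differ only in how the donor vector is assembled, which makes them natural "options" of a single module. The following sketch dispatches on a strategy name; the names and signature are ours, for illustration.

```python
import numpy as np

def donor_vector(pop, fit, i, strategy, F=0.5, p=0.1, rng=None):
    """Donor vectors for the DE mutation variants above (indices distinct from i)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pop)
    r1, r2, r3, r4 = rng.choice([j for j in range(n) if j != i], 4, replace=False)
    best = pop[np.argmin(fit)]
    if strategy == "rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "best/1":
        return best + F * (pop[r1] - pop[r2])
    if strategy == "best/2":
        return best + F * (pop[r1] - pop[r2]) + F * (pop[r3] - pop[r4])
    if strategy == "target-to-best/1":
        return pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])
    if strategy == "target-to-pbest/1":
        top = np.argsort(fit)[:max(1, int(np.ceil(p * n)))]   # top 100p% solutions
        pbest = pop[rng.choice(top)]
        return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - pop[r2])
    if strategy == "2-opt/1":
        # the better of x_r1, x_r2 serves as the base vector
        a, b = (r1, r2) if fit[r1] < fit[r2] else (r2, r1)
        return pop[a] + F * (pop[b] - pop[r3])
    raise ValueError(strategy)
```

A quick invariant: if all individuals are identical, every strategy returns that same vector, since all difference terms vanish.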
4.2. SelfAdaptation of Control Parameters
The performance of the DE algorithm is highly dependent on the values of the parameters $F$ and $CR$, for which the optimal values in turn depend on the optimization problem at hand. The self-adaptive DE variant JADE (Jingqiao Zhang and Sanderson, 2007) was proposed to control these parameters in a self-adaptive manner, without intervention of the user. This self-adaptive parameter scheme is used in both the DE and hybrid algorithm instances.
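As a reminder of how JADE's scheme works, the sketch below follows the published rule: per-individual $CR$ is drawn from a clipped normal distribution and $F$ from a truncated Cauchy distribution, and both location parameters are updated from the parameter values of successful trials ($F$ via a Lehmer mean). Class and method names are ours.

```python
import numpy as np

class JadeParams:
    """Sketch of JADE's parameter self-adaptation (Jingqiao Zhang and Sanderson, 2007)."""
    def __init__(self, c=0.1):
        self.mu_cr, self.mu_f, self.c = 0.5, 0.5, c

    def sample(self, rng):
        """Draw (CR_i, F_i) for one individual."""
        cr = float(np.clip(rng.normal(self.mu_cr, 0.1), 0.0, 1.0))
        f = 0.0
        while f <= 0.0:                                 # redraw non-positive Cauchy samples
            f = self.mu_f + 0.1 * rng.standard_cauchy()
        return cr, min(f, 1.0)

    def update(self, s_cr, s_f):
        """Shift the means toward parameter values that produced successful trials."""
        if s_cr:
            self.mu_cr = (1 - self.c) * self.mu_cr + self.c * np.mean(s_cr)
        if s_f:
            lehmer = np.sum(np.square(s_f)) / np.sum(s_f)   # Lehmer mean favors larger F
            self.mu_f = (1 - self.c) * self.mu_f + self.c * lehmer
```

The Lehmer mean biases $\mu_F$ toward larger successful values, counteracting the tendency of $F$ to decay toward zero under plain averaging.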
5. Hybridizing PSO with DE
Here, we propose a hybrid algorithm framework called PSODE, which combines the mutation and crossover operators of DE with the velocity and position updates of PSO. This implementation allows combinations of all operators mentioned earlier in a single algorithm, creating the potential for a large number of possible hybrid algorithms. We list the pseudocode of PSODE in Alg. 4, which works as follows.

The initial population $X = \{\vec{x}_1, \dots, \vec{x}_M\}$ ($M$ stands for the swarm size) is sampled uniformly at random in the search space, and the corresponding velocity vectors are initialized to zero (as suggested in (Engelbrecht, 2012)).

After evaluating $X$, we create a population $P$ by applying the PSO position update to each solution in $X$.

Similarly, a population $V$ of donor vectors is created by applying the DE mutation to each solution in $X$.

Then, a population $U$ of size $M$ is generated by recombining information among the solutions in $X$ and $V$, based on the DE crossover.

Finally, a new population is generated by selecting good solutions from $P$ and $U$ (please see below).
Four different selection methods are considered in this work, two of which are elitist and two non-elitist. A problem arises during the selection procedure: solutions from $U$ have undergone the mutation and crossover of DE, which alter their positions but ignore their velocities, leading to unmatched pairs of positions and velocities. The velocities that these particles inherited from $X$ may no longer be meaningful, potentially breaking the inner workings of PSO in the next iteration. To solve this issue, we propose to recompute the velocity vector according to the displacement of a particle resulting from the mutation and crossover operators, namely:
(9) $\vec{v}_i = \vec{u}_i - \vec{x}_i$
where $\vec{u}_i$ is generated by the aforementioned procedure.
A selection operator is required to select $M$ particles from $P$, $U$, and $X$ for the next generation. Note that $V$ is not considered in the selection procedure, as the solution vectors in this population have already been recombined and stored in $U$. We have implemented four different selection methods: two of them only consider population $P$, resulting from the variation operators of PSO, and population $U$, obtained from the variation operators of DE. This type of selection is essentially non-elitist, allowing for deteriorations. The other two methods implement elitism by additionally taking population $X$ into account.
We use the following naming scheme for the selection methods: [comparison method]/[number of populations]. Using this scheme, we can distinguish the four selection methods: pairwise/2, pairwise/3, union/2, and union/3. The "pairwise" comparison method means that the $i$-th members (assuming the solutions are indexed) of each considered population are compared to each other, and the best of them is chosen for the next generation. The "union" method selects the $M$ best solutions from the union of the considered populations. Here, a "2" signals the inclusion of the two populations $P$ and $U$, and a "3" indicates the further inclusion of $X$. For example, the pairwise/2 method selects the better individual of each pair $(\vec{p}_i, \vec{u}_i)$ from $P$ and $U$, while the union/3 method selects the $M$ best individuals from $P \cup U \cup X$.
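The four selection methods can be expressed compactly; the sketch below is an illustrative Python rendering (the framework itself is written in C++, see Section 6), with our own function signature.

```python
import numpy as np

def select(P, fP, U, fU, X=None, fX=None, method="pairwise/3"):
    """The four PSODE selection methods; the '/3' variants also include the parents X."""
    pops = [(P, fP), (U, fU)]
    if method.endswith("/3"):
        pops.append((X, fX))                      # elitist: parents may survive
    n = len(fP)
    if method.startswith("pairwise"):
        sols = np.stack([s for s, _ in pops])     # shape (k, n, d)
        fits = np.stack([f for _, f in pops])     # shape (k, n)
        winner = np.argmin(fits, axis=0)          # best population per index i
        rows = np.arange(n)
        return sols[winner, rows], fits[winner, rows]
    # union: the n best from the concatenation of all considered populations
    all_s = np.concatenate([s for s, _ in pops])
    all_f = np.concatenate([f for _, f in pops])
    idx = np.argsort(all_f)[:n]
    return all_s[idx], all_f[idx]
```

Note the difference in behavior: pairwise selection preserves the index pairing (and hence the PSO bookkeeping per particle), while union selection may keep several offspring of the same parent.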
6. Experiment
A software framework has been implemented in C++ to generate PSO, DE, and PSODE instances from all aforementioned algorithmic modules, e.g., topologies and mutation strategies. This framework is tested with IOHprofiler, using the 24 functions from BBOB/COCO (Hansen et al., 2016), which are organized into five function groups: 1) separable functions, 2) functions with low or moderate conditioning, 3) unimodal functions with high conditioning, 4) multimodal functions with adequate global structure, and 5) multimodal functions with weak global structure.
In the experiments conducted, a PSODE instance is considered as a combination of five modules: velocity update strategy, population topology, mutation method, crossover method, and selection method. Combining every option of each of these five modules, we obtain a total of $4 \times 5 \times 5 \times 2 \times 4 = 800$ different PSODE instances.
By combining the velocity update strategies and topologies, we obtain $4 \times 5 = 20$ PSO instances, and similarly we obtain $5 \times 2 = 10$ DE instances.
Naming Convention of Algorithm Instances
As each PSO, DE, and hybrid instance can be specified by its composing modules, it is named using the abbreviations of those modules. Hybrid instances are named as follows:
H_[velocity strategy]_[topology]_[mutation]
_[crossover]_[selection]
PSO instances are named as:
P_[velocity strategy]_[topology] 
And DE instances are named as:
D_[mutation]_[crossover] 
Options of all modules are listed in Table 1.
Experiment Setup
The following parameters are used throughout the experiment:

Function evaluation budget: .

Population (swarm) size: the same value is used for all algorithm instances, due to the relatively consistent performance that instances show across different function groups and dimensionalities when using this value.

Hyperparameters in PSO: In Eq. (2) and (3), $\phi_1 = \phi_2$ are taken as recommended in (Clerc and Kennedy, 2002), and for FIPS (Eq. (4)), the setting of $\chi$ and $\phi$ is adopted from (Mendes et al., 2004). In the fixed inertia strategy, $w$ is kept constant, while in the decreasing inertia strategy, $w$ is linearly decreased over time. For the target-to-pbest/1 mutation scheme, the value of $p$ is chosen following the findings of (Jingqiao Zhang and Sanderson, 2007).

Hyperparameters in DE: $F$ and $CR$ are managed by the JADE self-adaptation scheme.

Number of independent runs per function: . Note that only one function instance (instance “1”) is used for each function.

Performance measure: the expected running time (ERT) (Price, 1997), which is the number of function evaluations an algorithm is expected to use to reach a given target function value $f_t$ for the first time. ERT is defined as $\mathrm{ERT} = \#\mathrm{FEs} / \#\mathrm{succ}$, where $\#\mathrm{FEs}$ denotes the total number of function evaluations taken to hit $f_t$ in all runs (while $f_t$ might not be reached in every run), and $\#\mathrm{succ}$ denotes the number of successful runs.
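The ERT definition above translates directly into code. In this minimal sketch (our own helper, not part of IOHprofiler), a failed run contributes its full evaluation budget to the numerator but not to the success count:

```python
def ert(evals_to_hit, budget):
    """Expected running time: total evaluations over all runs / number of successes.

    evals_to_hit[i] is the evaluation count at which run i first reached the
    target f_t, or None if the run failed (failed runs spend their full budget)."""
    total = sum(budget if e is None else e for e in evals_to_hit)
    succ = sum(e is not None for e in evals_to_hit)
    return float("inf") if succ == 0 else total / succ
```

For example, with runs hitting the target after 100 and 300 evaluations and one failed run on a budget of 1000, the ERT is $(100 + 1000 + 300)/2 = 700$ evaluations; if no run succeeds, the ERT is infinite, which is why some table cells below are empty.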
To present the results, we rank the algorithm instances with regard to their ERT values. This is done by first ranking the instances on the targets of every benchmark function, and then taking the average rank across all targets per function. Finally, the presented rank is obtained by taking the average rank over all test functions. This is done for both dimensionalities. A dataset containing the running time of each independent run and the ERT values of each algorithm instance, together with supporting scripts, is available at (Boks et al., 2020).
[velocity strategy]
B – Bare-Bones PSO
F – Fully-informed PSO (FIPS)
I – Inertia weight
D – Decreasing inertia weight

[topology]
L – lbest (ring)
G – gbest (fully connected)
N – Von Neumann
I – Increasing connectivity
M – Dynamic multi-swarm

[mutation]
B1 – DE/best/1
B2 – DE/best/2
T1 – DE/target-to-best/1
PB – DE/target-to-pbest/1
O1 – DE/2-Opt/1

[crossover]
B – Binomial crossover
E – Exponential crossover

[selection]
U2 – Union/2
U3 – Union/3
P2 – Pairwise/2
P3 – Pairwise/3
7. Results
Figure 1 depicts the empirical cumulative distribution functions (ECDFs) of the highest-ranked algorithm instances in both 5D and 20D; since some instances rank among the best in both dimensionalities, fewer distinct algorithms are shown. Tables 2 and 3 show the expected running times of the 10 highest-ranked instances and the 10 instances ranked in the middle, in 5D and 20D, respectively. ERT values are normalized using the corresponding ERT values of the state-of-the-art Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Though many more PSODE instances were tested, DE instances generally showed the best performance in both 5D and 20D. All PSO instances were outperformed by DE and many PSODE instances. This is no complete surprise, as several studies (e.g., (Vesterstrom and Thomsen, 2004; Iwan et al., 2012)) demonstrated the relative superiority of DE over PSO.
Looking at the ranked algorithm instances, it is clear that some modules are more successful than others. The (decreasing) inertia weight velocity update strategies are dominant among the top-performing algorithms, as are pairwise/3 selection and binomial crossover. Target-to-pbest/1 mutation is most successful in 5D, while target-to-best/1 seems a better choice in 20D. This is surprising, as one may expect the less greedy target-to-pbest/1 mutation to be more beneficial in higher-dimensional search spaces, where it is increasingly difficult to avoid getting stuck in local optima. The best choice of selection method is convincingly pairwise/3. This seems to be one of the most crucial modules for the PSODE algorithm, as most instances with any other selection method show considerably worse performance. The apparent importance of an elitist strategy suggests that the algorithm's convergence with non-elitist selection is too slow, which could be due to the application of two different search strategies. The instances H_I_*_PB_B_P3 and H_I_*_T1_B_P3 appear to be the most competitive PSODE instances, with the topology choice having little influence on the observed performance. The most highly ranked DE instances are D_T1_B and D_PB_B, in both dimensionalities. Binomial crossover seems superior to its exponential counterpart, especially in 20 dimensions.
Interestingly, the PSODE and PSO algorithms "prefer" different module options. For example, the Fully Informed Particle Swarm works well in PSO instances, but PSODE instances perform better with the (decreasing) inertia weight. Bare-Bones PSO showed the poorest overall performance of the four velocity update strategies.
Notable is the large performance difference between the worst and best generated algorithm instances. Some combinations of modules, as to be expected while arbitrarily combining operators, show very poor performance, failing to solve even the most trivial problems. This stresses the importance of proper module selection.
8. Conclusion and Future Work
We implement an extensible and modular hybridization of PSO and DE, called PSODE, in which a large number of variants of both PSO and DE are incorporated as module options. Interestingly, a vast number of unseen swarm algorithms can easily be instantiated from this hybridization, paving the way for designing and selecting appropriate swarm algorithms for specific optimization tasks. In this work, we investigate, on the benchmark functions from BBOB, 20 PSO variants, 10 DE variants, and 800 PSODE instances resulting from combining the variants of PSO and DE, and identify some promising hybrid algorithms that surpass PSO but fail to outperform the best DE variants on subsets of the BBOB problems. Moreover, we obtain insights into suitable combinations of algorithmic modules. Specifically, the efficacy of the target-to-(p)best mutation operators, the (decreasing) inertia weight velocity update strategies, and binomial crossover was demonstrated. On the other hand, some inefficient operators, such as Bare-Bones PSO, were identified. The neighborhood topology appeared to have the least effect on the observed performance of the hybrid algorithm.
Future work lies in extending the hybridization framework. Firstly, we plan to incorporate as many state-of-the-art PSO and DE variants as possible. Secondly, we shall explore alternative ways of combining PSO and DE. Lastly, it is worthwhile to consider the problem of selecting a suitable hybrid algorithm for an unseen optimization problem, taking the approach of automated algorithm selection.
Algorithm Instance  F1  F2  F6  F8  F11  F12  F17  F18  F21  

rank  CMAES  658.933  2138.400  1653.667  2834.714  2207.400  5456.867  9248.600  13745.867  74140.538 
1  D_T1_B  2.472  1.175  2.261  3.177  1.640  2.362  1.907  9.397  0.592 
2  D_PB_B  2.546  1.213  2.321  4.031  1.643  2.580  1.258  5.324  1.072 
3  D_PB_E  3.176  1.483  3.635  5.152  1.700  2.750  1.584  4.350  0.305 
4  D_T1_E  3.060  1.477  3.583  3.670  1.660  2.281  2.036  9.112  0.352 
5  D_O1_B  3.152  1.466  3.717  4.155  6.360  8.818  1.445  8.405  0.383 
6  H_I_I_PB_E_P3  3.911  1.830  3.817  3.724  2.951  3.055  3.301  3.021  0.519 
7  H_I_I_PB_B_P3  3.685  1.694  3.117  3.115  2.912  3.047  2.102  3.222  1.063 
8  H_I_G_PB_B_P3  3.138  1.473  2.813  5.656  2.968  3.099  4.684  3.507  2.251 
9  H_I_I_T1_B_P3  3.599  1.700  3.155  5.106  2.837  2.670  2.914  3.975  0.727 
10  H_I_N_PB_B_P3  3.480  1.650  3.100  5.061  2.852  2.932  2.453  3.213  1.064 
…  …  …  …  …  …  …  …  …  …  … 
411  H_I_N_PB_B_P2  4.761  2.268  4.744  12.933  3.113  53.561  2.738  
412  H_D_N_T1_E_U3  29.656  38.499  22.459  25.214  5.091  9.053  22.333  8.645  1.247 
413  H_B_L_B2_E_U3  25.515  13.345  91.998  10.758  4.203  5.516  16.277  0.960  
414  H_F_L_O1_E_U3  19.585  9.980  94.563  18.771  5.529  12.265  161.662  7.416  3.586 
415  H_B_G_T1_E_P3  4.736  2.288  6.532  10.503  45.093  2.808  36.108  3.474  
416  H_B_N_B1_B_U2  6.531  3.029  8.313  6.918  93.749  13.117  28.817  19.629  
417  H_D_I_T1_E_P2  5.506  2.545  5.917  12.812  7.791  34.691  3.433  
418  H_D_M_O1_E_U3  21.270  10.963  33.571  12.992  5.882  7.250  12.577  5.760  1.192 
419  H_B_G_O1_B_P2  4.091  1.764  4.959  157.845  2.253  
420  H_F_L_T1_E_U3  26.450  15.383  17.706  12.174  4.609  9.334  16.892  53.541  1.822 
Algorithm Instance  F1  F2  F6  F8  F11  F12  F17  F18  F21  

rank  CMAES  830.800  16498.533  4018.600  19140.467  12212.267  15316.733  5846.400  17472.333  801759 
1  D_T1_B  7.377  0.864  5.912  3.702  2.678  4.699  3.144  3.604  0.385 
2  D_PB_B  7.731  0.901  6.884  6.766  3.833  5.999  3.158  1.719  0.193 
3  H_I_I_T1_B_P3  10.988  1.195  7.894  4.153  6.596  7.656  3.988  3.081  0.298 
4  H_I_M_T1_B_P3  12.621  1.434  9.714  5.296  8.389  8.152  4.979  3.138  0.186 
5  H_I_L_T1_B_P3  11.402  1.299  9.271  5.146  8.170  7.422  4.771  3.406  0.341 
6  H_I_N_T1_B_P3  10.641  1.202  8.218  4.705  7.253  7.928  4.325  2.741  0.338 
7  H_D_M_T1_B_P3  12.865  1.476  10.100  6.036  8.119  8.768  5.345  3.450  0.354 
8  D_B2_B  7.983  0.885  5.862  10.401  6.455  9.258  44.240  0.829  
9  H_D_G_T1_B_P3  9.031  1.074  7.910  4.419  5.690  8.078  4.079  7.838  0.695 
10  H_D_N_T1_B_P3  11.307  1.287  9.057  4.801  9.854  5.949  4.517  4.288  0.303 
…  …  …  …  …  …  …  …  …  …  … 
411  H_D_L_T1_B_U2  39.225  6.262  312.925  35.178  0.728  
412  H_F_M_T1_B_U2  55.045  5.655  34.213  0.360  
413  H_B_M_T1_E_P2  39.181  4.393  41.771  0.369  
414  P_F_N  53.733  1480.838  88.421  0.163  
415  H_I_M_T1_B_U2  40.014  7.379  313.468  35.252  0.546  
416  H_I_N_PB_B_U3  70.776  362.611  86.426  18.979  339.045  113.442  0.433  
417  H_I_M_B1_E_P2  33.073  3.734  72.629  35.327  0.876  
418  H_I_G_B2_B_U2  43.424  8.498  104.122  7.367  
419  H_B_G_PB_B_U2  41.308  16.007  50.786  1.054  
420  H_B_N_PB_B_P3  33.984  4.203  32.929  1.314 
Acknowledgments
Hao Wang acknowledges the support from the Paris Île-de-France Region.
References
R. Boks, H. Wang, and T. Bäck (2020). Data and supporting scripts for the experiments in this paper.
C.-W. Chiang, W.-P. Lee, and J.-S. Heh (2010). A 2-opt based differential evolution for global optimization. Applied Soft Computing 10(4), pp. 1200–1207.
M. Clerc and J. Kennedy (2002). The particle swarm – explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation 6(1), pp. 58–73.
C. Doerr, F. Ye, N. Horesh, H. Wang, O. M. Shir, and T. Bäck (2019). Benchmarking discrete optimization heuristics with IOHprofiler. Applied Soft Computing, 106027.
R. Eberhart and J. Kennedy (1995). A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp. 39–43.
A. P. Engelbrecht (2012). Particle swarm optimization: velocity initialization. In 2012 IEEE Congress on Evolutionary Computation, pp. 1–8.
N. Hansen, A. Auger, O. Mersmann, T. Tušar, and D. Brockhoff (2016). COCO: a platform for comparing continuous optimizers in a black-box setting. ArXiv e-prints arXiv:1603.08785.
T. Hendtlass (2001). A combined swarm differential evolution algorithm for optimization problems. In Proceedings of the 14th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (IEA/AIE '01), pp. 11–18.
M. Iwan, R. Akmeliawati, T. Faisal, and H. M. A. A. Al-Assadi (2012). Performance comparison of differential evolution and particle swarm optimization in constrained optimization. Procedia Engineering 41, pp. 1323–1328.
J. Zhang and A. C. Sanderson (2007). JADE: self-adaptive differential evolution with fast and reliable convergence performance. In 2007 IEEE Congress on Evolutionary Computation, pp. 2251–2258.
J. Kennedy and R. Mendes (2002). Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation (CEC '02), Vol. 2, pp. 1671–1676.
J. Kennedy (2003). Bare bones particle swarms. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS '03), pp. 80–87.
J. J. Liang and P. N. Suganthan (2005). Dynamic multi-swarm particle swarm optimizer. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium (SIS 2005), pp. 124–129.
R. Mendes, J. Kennedy, and J. Neves (2004). The fully informed particle swarm: simpler, maybe better. IEEE Transactions on Evolutionary Computation 8(3), pp. 204–210.
M. G. H. Omran, A. P. Engelbrecht, and A. Salman (2007). Differential evolution based particle swarm optimization. In 2007 IEEE Swarm Intelligence Symposium, pp. 112–119.
M. Pant, R. Thangaraj, C. Grosan, and A. Abraham (2008). Hybrid differential evolution – particle swarm optimization algorithm for solving global optimization problems. In 2008 Third International Conference on Digital Information Management, pp. 18–24.
K. V. Price (1997). Differential evolution vs. the functions of the 2nd ICEO. In Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC '97), pp. 153–157.
Y. Shi and R. Eberhart (1998). A modified particle swarm optimizer. In 1998 IEEE International Conference on Evolutionary Computation Proceedings, pp. 69–73.
R. Storn and K. Price (1995). Differential evolution – a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, International Computer Science Institute, Berkeley.
P. N. Suganthan (1999). Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), Vol. 3, pp. 1958–1962.
C. Thornton, F. Hutter, H. H. Hoos, and K. Leyton-Brown (2013). Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '13), pp. 847–855.
S. van Rijn, H. Wang, M. van Leeuwen, and T. Bäck (2016). Evolving the structure of evolution strategies. In 2016 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1–8.
S. van Rijn, H. Wang, B. van Stein, and T. Bäck (2017). Algorithm configuration data mining for CMA evolution strategies. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '17), pp. 737–744.
J. Vesterstrom and R. Thomsen (2004). A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems. In Proceedings of the 2004 Congress on Evolutionary Computation, Vol. 2, pp. 1980–1987.
W.-J. Zhang and X.-F. Xie (2003). DEPSO: hybrid particle swarm with differential evolution operator. In SMC '03 Conference Proceedings, 2003 IEEE International Conference on Systems, Man and Cybernetics, Vol. 4, pp. 3816–3821.