1 Introduction
Many swarm intelligence algorithms and their variants have been proposed to solve various scientific and practical problems, such as neural architecture search (Stanley and Miikkulainen, 2002; Suganuma et al., 2020), airfoil design (Wang et al., 2017) and ordinary differential equation optimization (Usman et al., 2020). Because of its simple principle, few parameters and fast convergence, particle swarm optimization (PSO) (Eberhart and Kennedy, 1995) remains an important tool for solving such problems (Li et al., 2015; Bonyadi and Michalewicz, 2017; Liu et al., 2019). Many studies have been reported on parameter setting, selection of neighborhood topology, improvement of learning strategies and hybridization of PSO with other algorithms (Tanweer et al., 2015) to balance exploration and exploitation.
The inertia weight (Shi and Eberhart, 1998) and the constriction coefficient (Clerc and Kennedy, 2002) can balance exploration and exploitation by controlling the convergence tendency of the algorithm. Li et al. (2014) proposed an adaptive multi-swarm optimizer, which adaptively adjusted the number of subpopulations to solve dynamic optimization problems. Zhan et al. (2009) divided the state of the population into exploration, exploitation, convergence and jumping-out according to an evolutionary state defined by the relative Euclidean distances between particles, and adaptively adjusted the inertia weight and acceleration coefficients according to this state. Liu (2015) showed that PSO is stable if and only if the inertia weight and the acceleration coefficients satisfy the following condition:
(1) 0 < c1 + c2 < 24(1 - w^2) / (7 - 5w), with -1 < w < 1
where w is the inertia weight and c1 and c2 are the acceleration coefficients. When the required accuracy is low, a stable PSO often performs better than an unstable PSO.
According to their different information-sharing mechanisms, topologies affect the balance between exploration and exploitation. The heterogeneous idea is widely used in topology construction (Yang et al., 2018; Xu et al., 2019; Lynn and Suganthan, 2015; Xia et al., 2020). For PSO, the heterogeneous idea means that the population is divided into subpopulations, some of which are closely connected while others are relatively isolated. Yang et al. (2018) equally partitioned particles into four levels according to fitness; particles in each lower level learned from particles in the higher levels, while the top level was not updated. Xu et al. (2019) proposed TSLPSO with two subpopulations, in which one subpopulation guides its particles to search locally and the other guides its particles to search globally. Besides heterogeneous PSOs, Kennedy and Mendes (2002) compared the performance of gbest, lbest, pyramid, star, small and von Neumann topologies. Waintraub et al. (2007) arranged particles into a two-dimensional cellular topology and constructed learning exemplars according to the best particle within the neighborhood. Xia et al. (2020) randomly generated the left and right neighbors of each particle.
Many learning strategies were proposed to replace the population historical best position (gbest) and the personal historical best position (pbest). In CLPSO (Liang et al., 2006), a particle could learn from its own pbest with probability 1 - Pc and learn from the pbest of other particles selected by tournament selection with probability Pc, where Pc denotes the learning probability. For particles to have different exploitation and exploration abilities, each particle was assigned a different Pc value. Inspired by CLPSO, Li et al. (2012) constructed four types of learning exemplars for each particle: its own pbest, the pbest of a random particle, the gbest, and a randomly generated position. Zhan et al. (2011) proposed the orthogonal learning PSO (OLPSO), which searched for the best combination of pbest and gbest to construct a learning exemplar for each particle. Inspired by OLPSO, Xu et al. (2019) used the pbest and the combination of pbest and gbest as learning exemplars, independently.
Hybridization of PSO with other algorithms was another focus of researchers. Kıran and Gündüz (2013) recombined the global best solutions of PSO and the Artificial Bee Colony (ABC) algorithm to generate a new exemplar, named TheBest, which is then taken as the gbest for the PSO and as the neighbor of the onlooker bees for the ABC. Gong et al. (2016) combined the genetic algorithm with PSO and proposed GLPSO, applying crossover and mutation to the particles' historical best positions to construct the learning exemplars. Yang et al. (2018) introduced the idea of Differential Evolution (DE) into PSO and generated particles randomly as learning exemplars to solve large-scale optimization. In most variants of PSO, the selection of learning exemplars depends directly on fitness. However, exemplars with high fitness might lead the population into a local trapping region when applied to complex multimodal problems. To overcome this problem, the fitness and the improvement rate of fitness were regarded as two metrics to evaluate particle performance (Xia et al., 2020). In fact, the selection of learning exemplars based on high-fitness individuals implicitly adopts the idea of democratic voting, characterized by majority predominance as well as the independence of personal judgment (Lorenz et al., 2011). However, democratic methods tend to highlight the most popular opinion, which is not necessarily the most correct one. Indeed, democratic decisions are biased toward shallow and common information, at the expense of novel or specialized knowledge that is not widely known and shared (Simmons et al., 2011; Chen et al., 2004). To overcome the limitations of democratic methods, Prelec et al. (2017) proposed a sociological decision-making method called the surprisingly popular algorithm (SPA) (also called the surprisingly popular decision), which can identify the answer with the largest surprisingly popular degree from the crowd knowledge distribution and, hence, protect valuable knowledge known only by a minority.
Based on the above consideration, we introduce a new metric based on SPA to evaluate particles, and verify the effectiveness of the new metric. The main contributions of our work are as follows:

Inspired by the SPA, we guide the population by the particle with the maximal surprisingly popular degree, not merely by particles with high fitness.

We propose the adaptive Euclidean distance topology and the surprisingly popular algorithm-based adaptive Euclidean distance topology learning particle swarm optimization (SpadePSO), analyze the influence of different topologies on the SPA, and give a topology selection scheme for different dimensions.

We evaluate SpadePSO on the CEC2014 benchmark suite and two practical problems, showing that SpadePSO outperforms the compared PSO variants.
The remaining parts of this paper are organized as follows: Section 2 briefly introduces the related work; Section 3 proposes the adaptive Euclidean distance topology and SpadePSO and analyzes the influence of different topologies on the SPA; Section 4 presents the experimental validation; and finally, Section 5 presents the conclusion.
2 Related Work
We first introduce related PSO variants in Section 2.1. Then, we introduce the small-world network in Section 2.2. Finally, we introduce the SPA in Section 2.3.
2.1 PSO variants
Liang et al. (2006) proposed the comprehensive learning PSO (CLPSO), a famous PSO variant with strong exploration. In CLPSO, each dimension of a particle learns from its own pbest with probability 1 - Pc or from the corresponding dimension of another particle's pbest, selected by tournament selection, with probability Pc, independently. The velocity update formula is as follows:
(2) V_i^d = w * V_i^d + c * rand_i^d * (pbest_{f_i(d)}^d - X_i^d)
where V_i^d and X_i^d denote the velocity and position of the dth dimension of particle i, respectively; c is the acceleration coefficient; rand_i^d is a uniformly distributed random number independently generated within [0, 1] for the dth dimension; and w is the inertia weight used to control the velocity. The exemplar pbest_{f_i(d)} is constructed by the comprehensive learning strategy (CLS). By making each dimension of a particle learn from other particles with probability Pc, the CLS helps particles escape from a local optimal region with higher probability, so the CLS has stronger exploration. Lynn and Suganthan (2015) proposed the heterogeneous comprehensive learning PSO (HCLPSO). HCLPSO contains two heterogeneous subpopulations, named the exploration subpopulation and the exploitation subpopulation; both use the CLS proposed in the literature (Liang et al., 2006). The particles of the exploration subpopulation learn only from the personal best experiences of other particles in the same subpopulation, dimension by dimension, according to Eq. (2), whereas the particles of the exploitation subpopulation learn from the personal best experiences of particles in the whole population dimension by dimension and from the population-best experience traditionally, according to Eq. (3):
(3) V_i^d = w * V_i^d + c1 * rand1_i^d * (pbest_{f_i(d)}^d - X_i^d) + c2 * rand2_i^d * (gbest^d - X_i^d)
where gbest^d denotes the dth dimension of the population historical best position.
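As a concrete sketch, the two velocity updates above can be written in Python. The function names and the default coefficient values are illustrative assumptions, not the authors' implementation; `exemplar` stands for the per-dimension CLS exemplar pbest_{f_i(d)}.

```python
import numpy as np

rng = np.random.default_rng(0)

def clpso_velocity(v, x, exemplar, w=0.7, c=1.5):
    """Comprehensive-learning update (Eq. 2): each dimension d of particle i
    learns from exemplar[d], a pbest dimension chosen by the CLS."""
    r = rng.random(v.shape)
    return w * v + c * r * (exemplar - x)

def hclpso_exploit_velocity(v, x, exemplar, gbest, w=0.7, c1=1.5, c2=1.5):
    """Exploitation-subpopulation update (Eq. 3): adds attraction to gbest."""
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    return w * v + c1 * r1 * (exemplar - x) + c2 * r2 * (gbest - x)
```

Setting c1 = c2 = 0 reduces Eq. (3) to pure inertia w * v, which is a quick way to sanity-check an implementation.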
2.2 Small-world network
Watts and Strogatz (1998) showed that information transmission through social networks is affected by three factors of network structure: the number of neighbors, the number of clusters and the average shortest path length from one node to another. They also proposed the small-world network (also known as the WS small-world network), in which all the edges of a ring regular network are rewired with probability p, so the WS small-world network lies between regular networks and random networks. Newman and Watts (1999) proposed the NW small-world network, in which nodes do not break any connection between nearest neighbors but instead add a connection between two nodes with probability p. The NW small-world network is somewhat easier to analyze than the WS small-world network because it cannot lead to the formation of isolated clusters, whereas this can indeed happen in the WS small-world network.
Compared with regular networks and random networks, small-world networks can suppress the impact of individual particles in the population and maintain the population diversity. Hence, small-world networks have been widely used as connecting topologies in PSO (Gong and Zhang, 2013; Qiu et al., 2020; Liu et al., 2020).
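The NW construction described above (keep the ring lattice, add shortcuts with probability p) can be sketched in a few lines of plain Python; the function name and edge representation are illustrative assumptions.

```python
import random

def nw_small_world(n, k, p, seed=0):
    """Newman-Watts small-world graph: a ring lattice where each node links
    to its k nearest neighbors, plus random shortcuts added with probability
    p.  No edge is ever removed, so isolated clusters cannot form."""
    rnd = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):          # k/2 neighbors on each side
            edges.add(frozenset((i, (i + j) % n)))
    lattice = list(edges)
    for _ in lattice:
        if rnd.random() < p:                    # add a shortcut, keep the lattice edge
            a, b = rnd.randrange(n), rnd.randrange(n)
            if a != b:
                edges.add(frozenset((a, b)))
    return edges
```

With p = 0 the result is the pure ring lattice (n * k / 2 edges); as p grows, shortcuts shorten the average path length while the clustered lattice is preserved.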
2.3 Surprisingly popular decision
To give more weight to correct knowledge that may not be widely known, Prelec et al. (2017) proposed what they termed the "surprisingly popular" decision method, which hinges on asking people two things about a given question: what they think the right answer is, and how popular they think each answer will be. For example: Is Philadelphia the capital of Pennsylvania? As shown in Fig. 1, 80% of people do not know the geography of the United States well; they may only recall that Philadelphia is a large, famous, historically significant city in Pennsylvania, and conclude mistakenly that it is the capital. The 20% of people who vote "No" probably possess an additional piece of evidence: that the capital is Harrisburg. As shown in Fig. 1, people with different knowledge have different perceptions of the popularity of their answers. People who know that Harrisburg is the state capital realize that the popularity of their answer will be very low, while people who only know Philadelphia will blindly believe that most people share their answer.
If majority voting were used to determine the answer, this question would obviously be answered wrongly. Since the SPA highlights the correct answer by also inquiring about answer popularity, a respondent who knows the right answer and also realizes that most people's answer is wrong can express both pieces of information.
Based on inquiries of popularity, the SPA can thus highlight the correct answer. Lee et al. (2018) used the SPA to predict the winners of National Football League (NFL) games and found that it predicted better than many NFL media figures. To solve classification problems, Luo and Liu (2019) asked each classifier to predict the performance of the other classifiers and learned from their feedback. While the SPA focuses on choosing the right answer from a list of alternatives, Hosseini et al. (2021) extended the SPA to give a ground-truth ranking of alternatives.
3 Our method
In Section 3.1, we give three definitions for the SPA calculation process and briefly review how to model the SPA in PSO (Cui et al., 2019). In Section 3.2, we propose the adaptive Euclidean distance topology. In Section 3.3, we propose the surprisingly popular algorithm-based adaptive Euclidean distance topology learning particle swarm optimization (SpadePSO) and analyze the diversity of its subpopulations. In Section 3.4, we analyze the influence of different topologies on the SPA and give the topology selection under different dimensions.
3.1 Modeling the surprisingly popular decision in PSO
To better understand the SPA and support the follow-up work, we give the following three definitions for the SPA calculation process.
Definition 1 (Knowledge Prevalence Degree): The proportion of people in the group who possess a certain piece of knowledge. For example, Fig. 1 shows the knowledge prevalence degrees of "Philadelphia is the largest city in Pennsylvania" and "Harrisburg is the capital of Pennsylvania".
Definition 2 (Expected Turnout):
It is the group's estimate of the popularity of a given answer, obtained by summing each individual's estimate. An individual's estimate is the approximate proportion of people who share the same knowledge; each respondent can estimate the popularity of their answer through the knowledge prevalence degree. Respondents with two pieces of knowledge estimate the popularity of their answer as the product of the two knowledge prevalence degrees, while respondents with only one piece of knowledge estimate it as that knowledge's prevalence degree.
The expected turnout of the answer “No”:
Definition 3 (Surprisingly Popular Degree): The ratio of the actual turnout to the expected turnout. In the SPA, the actual turnout of the correct answer is much higher than its expected turnout, so the correct answer has the highest surprisingly popular degree.
The surprisingly popular degree of the answer "Yes" is lower than that of the answer "No". Hence, the answer "No" is considered to be the right answer by the SPA.
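The three definitions above can be combined into a short sketch of the SPA. The function name and the concrete vote and prediction numbers below are illustrative assumptions modeled on the Philadelphia example (8 misinformed "Yes" voters, 2 informed "No" voters), not figures taken from Fig. 1.

```python
def surprisingly_popular(votes, predictions):
    """votes[i] is respondent i's answer; predictions[i] is a dict mapping
    each answer to i's estimate of its vote share.  Returns the answer whose
    actual vote share most exceeds the crowd's average predicted share."""
    n = len(votes)
    answers = set(votes)
    actual = {a: votes.count(a) / n for a in answers}            # actual turnout
    expected = {a: sum(p[a] for p in predictions) / n for a in answers}
    degree = {a: actual[a] / expected[a] for a in answers}       # Definition 3
    return max(degree, key=degree.get), degree
```

On the assumed numbers, "No" gets only 20% of the votes but the crowd predicts it will get even less, so its surprisingly popular degree exceeds 1 and it wins, while the majority answer "Yes" scores below 1.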
Next, we briefly review how to model the SPA in PSO (Cui et al., 2019). In the PSO population, a directed edge from particle i to particle j means that particle i has the privilege to assess the fitness of particle j and to learn from it. The asymmetric adjacency matrix is defined as follows:
(4) 
For a PSO population with 5 particles, Fig. 2 illustrates our surprisingly popular decision model. The sum of the ith row of the adjacency matrix is defined as the amount of knowledge of particle i, and the percentage sum of the jth column is defined as the knowledge prevalence degree of particle j.
First, each particle votes for its neighbors given by the knowledge transfer topology as follows:
(5) 
where the fitness function determines the vote. The voting vector is described by Eq. (6) and illustrated in Fig. 2.
(6) 
The particles that receive votes form the set of candidate exemplars, i.e., the promising particles (or "answers"), as illustrated in Fig. 2.
Counting the voting results then gives the actual turnout of each candidate particle, computed as in Eq. (7) and illustrated in Fig. 2.
(7) 
Each particle treats the particle it votes for as the most promising guide of its search direction, and then the popularity of its answer is computed as in Eq. (9), as illustrated in Fig. 2.
(9) 
Since all opposite answers of a particle share the remaining popularity, the popularity of each opposite answer is defined as in Eq. (10), as illustrated in Fig. 2.
(10) 
The average of the popularities reported by all particles is taken as the expected turnout, which is defined as in Eq. (11) and illustrated in Fig. 2.
(11) 
Hereby, the surprisingly popular degree of each candidate exemplar can be defined as in Eq. (12), as illustrated in Fig. 2.
(12) 
Finally, the exemplar derived from the SPA, i.e., the particle with the maximal surprisingly popular degree, is selected as follows:
(13) 
(14) 
In the example shown in Fig. 2, particle No. 3 is selected as the exemplar, with the maximal surprisingly popular degree of 2.81.
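The voting pipeline of Eqs. (5)-(14) can be sketched as follows. Since several of the formulas are not reproduced above, the popularity-estimation step here (a voter estimates its own answer's share by the voted particle's knowledge prevalence degree, and the other candidates split the remainder evenly) is an assumed reconstruction rather than the exact model of Cui et al. (2019); the function name and the toy topology are also illustrative.

```python
import numpy as np

def spa_exemplar(fitness, T):
    """Sketch of surprisingly-popular exemplar selection (assumed
    reconstruction of Eqs. 5-14).  T[i, j] = 1 iff particle i may assess
    particle j; fitness is minimized."""
    n = len(fitness)
    prevalence = T.sum(axis=0) / T.sum()            # knowledge prevalence degree
    votes = np.array([min(np.flatnonzero(T[i]), key=lambda j: fitness[j])
                      for i in range(n)])           # each i votes its best neighbor
    candidates = np.unique(votes)                   # the promising "answers"
    m = len(candidates)
    actual = {a: np.mean(votes == a) for a in candidates}
    expected = {}
    for a in candidates:
        # assumed popularity model: own answer's share ~ prevalence of the
        # voted particle; opposite answers share the remainder equally
        est = [prevalence[votes[i]] if votes[i] == a
               else (1 - prevalence[votes[i]]) / max(m - 1, 1)
               for i in range(n)]
        expected[a] = float(np.mean(est))           # expected turnout
    degree = {a: actual[a] / expected[a] for a in candidates}
    return max(degree, key=degree.get), degree
```

On a regular 5-particle ring where every particle sees itself and its next two neighbors, every prevalence is equal, so the sketch degenerates to favoring the most-voted candidate; heterogeneous topologies are what let a rarely-voted particle become surprisingly popular.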
3.2 The strategy of adaptive Euclidean distance topology
One of the most important parts of the SPA is the voting process, which selects the exemplar to guide the search direction. In this process, each particle is modelled as an agent possessing knowledge, including the fitness, position and velocity of its neighbors specified by the given knowledge transfer topology. Therefore, a reasonable topology has a great influence on the SPA.
Cui et al. (2019) defined a knowledge transfer topology that updates adaptively. However, in the initial stage, connections based on serial numbers cannot reflect the fitness and position information of the particles. Therefore, we propose a new knowledge transfer topology in this part.
The primary aspect for particles to consider when determining and adjusting their knowledge adaptively is the relative position information of particles in a multidimensional space, since particles with similar positions have largely overlapping search spaces. A topology composed of particles with similar positions has excellent clustering performance, which can not only ensure the diversity of the population but also speed up the convergence of the algorithm.
In the initial state, suppose that the knowledge transfer topology is a directed graph constructed from distance information, with all particles having the same out-degree. Given the Euclidean distance matrix of all particles, each particle connects to the particles with the shortest Euclidean distances to it.
To meet the constraints of small-world networks, the out-degree of each particle (the upper limit of its knowledge about the distance information) increases linearly over the successive iterations, and the updating formula is as follows:
(15) 
where t and t_max denote the current and the maximal iteration numbers, respectively, and the remaining coefficient is the speed at which the upper limit of the distance-information knowledge grows.
In Fig. 3, the out-degree is set to 4 (the graph allows self-loops: each node has a self-connected edge, not shown in the figure), the exploration subpopulation size is set to 3, the exploitation subpopulation size is set to 5 and the dimension is set to 2 (the heterogeneous populations are described in Section 3.3). It is worth noting that all particles have the same out-degree but different in-degrees during the iteration. Particle 1, at the center of the graph, has the largest in-degree, so it is learned from by the most particles. Particles 2 and 4, at the bounds of the current search space, have the smallest in-degrees, so they are learned from by the fewest particles. Potential surprisingly popular exemplars may be found among particles that are rarely learned from or that lie at the bounds of the current search space, which can change the search direction and expand the search space reasonably.
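The distance-based topology described above can be sketched as a k-nearest-neighbor directed graph with self-loops, together with the linear out-degree growth of Eq. (15). Both function names, and the exact form of the growth rule, are illustrative assumptions.

```python
import numpy as np

def knn_topology(X, k):
    """Directed kNN graph with self-loops: particle i points to itself and to
    its k nearest particles by Euclidean distance (out-degree k + 1)."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    T = np.zeros((n, n), dtype=int)
    for i in range(n):
        nearest = np.argsort(D[i])[:k + 1]      # self has distance 0
        T[i, nearest] = 1
    return T

def out_degree_limit(k0, t, t_max, speed, n):
    """Assumed linear growth of the out-degree cap over iterations (Eq. 15
    sketch): start at k0 and grow at the given speed, capped by the swarm size."""
    return min(k0 + int(speed * t / t_max), n)
```

Rebuilding the graph with a growing cap gradually densifies the topology, moving it from locally clustered toward fully connected as the run converges.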
Besides the distance-based graph, we use a temporary directed graph of experts to speed up the convergence of the algorithm. In every iteration, this graph is generated, with a certain probability, from the particles with the largest fitness values, named experts, ranked in descending order of fitness. The number of experts and the set of experts are defined accordingly. The generating rule of the adjacency matrix of the expert graph is as follows:
(16) 
where is the reconnection probability defined as follows:
(17) 
Suppose the relative positions of the population are as shown in Fig. 4, and the SPA decision is made in the current iteration according to the two graphs. Suppose the out-degree is set to 3; then each particle connects to the 2 particles with the shortest Euclidean distances and to itself. Specifically, the joint directed graph can be generated and used for voting with the SPA. If the out-degree then increases by 1 according to Eq. (15), each particle connects to the 3 particles with the shortest Euclidean distances and to itself.
3.3 SpadePSO
To balance exploration and exploitation, we propose the surprisingly popular algorithm-based adaptive Euclidean distance topology learning particle swarm optimization (SpadePSO), which contains two heterogeneous subpopulations. The velocity of a particle in the exploration subpopulation is updated according to Eq. (2), which endows the subpopulation with strong global exploration ability. The velocity of a particle in the exploitation subpopulation is updated according to Eq. (18), which endows the subpopulation with strong local exploitation ability:
(18) 
where the guiding term denotes the dth dimension of the particle with the maximal surprisingly popular degree, used to guide the exploitation subpopulation.
In the exploration subpopulation, particles use the CLS to learn only from the personal best experiences of other particles selected randomly from the same subpopulation. Hence, there is no accumulation of learning experience about the search direction in the exploration subpopulation, which therefore has high population diversity and strong exploration ability. In the exploitation subpopulation, particles learn not only from the exemplar constructed by the SPA but also from the best experience of the entire population. As depicted in Fig. 3, the knowledge transfer topology consists of the entire population, and the exemplar is constructed from the entire population. Therefore, the exploitation subpopulation has stronger exploitation ability. The flowchart is shown in Fig. 5 and the source code of SpadePSO can be downloaded from https://github.com/wuuu110/SpadePSO.
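Since Eq. (18) is not reproduced above, the following is a minimal sketch under the assumption, based on the surrounding description, that the exploitation update attracts a particle toward both the surprisingly popular exemplar and the population-best position; the function name and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def spade_exploit_velocity(v, x, spa_exemplar, gbest, w=0.7, c1=1.5, c2=1.5):
    """Assumed form of the SpadePSO exploitation update (Eq. 18 sketch):
    inertia plus attraction toward the SPA exemplar and toward gbest."""
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    return w * v + c1 * r1 * (spa_exemplar - x) + c2 * r2 * (gbest - x)
```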
Population diversity can be used to measure exploration and exploitation (Olorunda and Engelbrecht, 2008). To measure the exploration and exploitation abilities of the two subpopulations of SpadePSO more intuitively, we compare the diversities of the exploration subpopulation, the exploitation subpopulation and the whole population. The population diversity is calculated according to the following formulas:
(19) Div = (1/N) * sum_{i=1}^{N} sqrt( sum_{d=1}^{D} (x_i^d - xbar^d)^2 )
(20) xbar^d = (1/N) * sum_{i=1}^{N} x_i^d
where N is the population size, D is the dimension of the search space, x_i^d denotes the dth dimension of the ith particle, and xbar^d denotes the dth dimension of the center position of the population.
In Fig. 6, we present the diversity curves on several CEC2014 benchmark functions (F1, F4, F17 and F23) (Liang et al., 2014), where the dimension of the search space, the population size and the maximum number of function evaluations are 30, 40 and 7500, respectively. The results show that the exploration subpopulation maintains the highest diversity, which helps SpadePSO jump out of a local trapping region. The exploitation subpopulation keeps the smallest diversity and consequently converges rapidly. To sum up, the heterogeneous population design achieves our original intention.
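The diversity measure of Eqs. (19)-(20), the mean Euclidean distance of particles from the swarm center, can be computed directly; the function name is an illustrative choice.

```python
import numpy as np

def population_diversity(X):
    """Mean Euclidean distance of particles to the swarm center (Eqs. 19-20).
    X has shape (N, D): one row per particle."""
    center = X.mean(axis=0)                         # per-dimension center (Eq. 20)
    return float(np.mean(np.linalg.norm(X - center, axis=1)))
```

A fully converged swarm (all particles identical) has diversity 0, while a widely scattered exploration subpopulation scores high, which is exactly the contrast plotted in Fig. 6.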
3.4 Influence of different topologies on the SPA
A reasonable topology can improve the performance of the algorithm. To demonstrate the advantages of our topology, we compare the performance of the SPA with three different topologies on the CEC2014 benchmark suite (Liang et al., 2014). The suite consists of 30 test functions, and different dimensions, namely D = 10, 30, 50 and 100, are tested. The rest of the experimental details are presented in Section 4.
The first topology is the one proposed in Section 3.2. In the initial stage, the connections between particles are determined by the Euclidean distance; during the iterations, in addition to the Euclidean distance information, selected particles with high fitness values are used to speed up the convergence of the algorithm. The second is the topology of SPACatlePSO proposed in (Cui et al., 2019). In the initial stage, each particle is unidirectionally connected to a range of consecutively numbered particles; during the iterations, particles are guided according to the learning experience in the search direction, and selected particles with high fitness values are used: if a particle's vote improves its fitness value in the current iteration, the particle keeps the connection to the voted particle in the next iteration. The third topology combines the first and the second. In the initial stage, the connections between particles are determined by the Euclidean distance; during the iterations, particles are guided by the learning experience in the search direction, the Euclidean distance information and particles with high fitness values, with the same vote-based reconnection rule as in the second topology.
To make the experimental results more convincing, we use the well-known nonparametric Wilcoxon signed-rank test (Derrac et al., 2011) to check whether the performance advantage of an algorithm is significant. The comparison results are based on the mean value of errors. TABLE 2 presents the Wilcoxon signed-rank test of the SPA with the three different topologies on the 30 functions with 10, 30, 50 and 100 dimensions, respectively. The symbols +, -, = indicate that the first topology performs significantly better (+), significantly worse (-), or not significantly different (=) compared with the given topology. The number in brackets in the table is the p-value between the two algorithms. An algorithm can be considered significantly better than another if its p-value is less than 0.1.
As shown in TABLE 2, the first topology performs best in 30, 50 and 100 dimensions but worst in 10 dimensions. The first topology is significantly better than the second topology in 30 dimensions and better than the third topology in 100 dimensions. Therefore, for high-dimensional problems, the most suitable topology for the SPA is the first topology proposed in this paper.
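The +/-/= tallies reported in TABLE 2 can be reproduced by a simple per-function comparison of mean errors; the function name and tolerance are illustrative, and in the paper the significance decision per pair comes from the Wilcoxon signed-rank test (e.g. `scipy.stats.wilcoxon`) rather than a raw comparison.

```python
def tally(errors_a, errors_b, tol=1e-12):
    """Per-function win/loss/tie counts between two algorithms, where each
    entry is the mean error on one benchmark function (lower is better)."""
    wins = sum(a < b - tol for a, b in zip(errors_a, errors_b))
    losses = sum(a > b + tol for a, b in zip(errors_a, errors_b))
    ties = len(errors_a) - wins - losses
    return wins, losses, ties
```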

Dimension                  10D        30D        50D        100D
vs. the second topology
  +                        13         19 (0.05)  18         18
  -                        16         10         12         11
  =                        1 (0.61)   1          0 (0.33)   1 (0.21)
vs. the third topology
  +                        13         15         16         17 (0.08)
  -                        15         14         13         11
  =                        2 (0.82)   1 (0.74)   1 (0.84)   2
4 Evaluating SpadePSO
This section has four parts. Section 4.1 covers the parameter selection of SpadePSO. Section 4.2 evaluates the performance of SpadePSO on the full CEC2014 benchmark suite (Liang et al., 2014). Sections 4.3 and 4.4 are devoted to practical problems.
In Sections 4.1 and 4.2, the test problems come from the CEC2014 benchmark suite (Liang et al., 2014), which consists of 30 test functions. Functions F1-F3 are unimodal; F4-F16 are simple multimodal functions; F17-F22 are hybrid functions; and F23-F30 are composite functions combining multiple test problems into a complex landscape. The 30-dimensional versions are tested in Section 4.1, and the dimensions D = 10, 30, 50 and 100 are tested in Section 4.2. For all functions, the search range is [-100, 100]^D. In Section 4.3, we evaluate the performance of SpadePSO on a widely used practical optimization problem, the spread spectrum radar polyphase code design, whose dimension is 20. Dynamic systems in physics, biology and chemistry are commonly modeled by ordinary differential equations; in Section 4.4, we use SpadePSO to infer the parameters and structure of an HIV model whose dimension is 15.
The population size is set uniformly to 40. The maximum velocity is set to 10% of the search range, and the maximum number of objective function calls per run is 10000 (Suganthan et al., 2005). The number of runs per function is 30.
All experiments are executed on the system described as follows:

OS: Windows 10

CPU: core i7 (2.90GHz)

RAM: 16GB
4.1 Parameter selection
For the proposed SpadePSO algorithm, there are three parameters to be adjusted to fully maintain the application environment of the SPA and ensure the convergence speed: the out-degree of each particle in the small-world topology in the initial state, the upper limit of the degree-changing speed in Eq. (15), and the number of expert particles in the temporary matrix in Eq. (16). The ratio of the exploitation subpopulation size to the exploration subpopulation size in SpadePSO is 5:3. Traditional parameters, including the subpopulation sizes and the acceleration coefficients, are set the same as those in HCLPSO (Lynn and Suganthan, 2015).
In this part, the parameters are tuned on the first 16 functions of the CEC2014 benchmark suite. The results are ranked based on the mean value of errors in TABLE 3. According to the final rank, we chose the values 2 and 6 for the first two parameters and then tested the expert number, for which 5 turned out to be optimal. After the parameter trials, the optimal parameters are listed in TABLE 4.
Friedman test (without the expert temporary graph)
  values      2     4     6     8     10
              2     4     6     8     10
  Ave. rank   3.19  2.66  2.84  3.16  3.16
  Final rank  5     1     2     3     4
Friedman test of the two degree parameters (without the expert temporary graph)
  values      1     2     3     4     5     6     7     8
              7     6     5     4     3     2     1     0
  Ave. rank   4.13  3.88  4.84  4.49  4.59  5.19  5.22  5.22
  Final rank  3     1     6     4     5     7     8     2
Friedman test of the expert number (with the first two parameters set to 2 and 6)
  values      2     3     4     5     6
  Ave. rank   3.28  2.97  3.06  2.75  2.94
  Final rank  5     3     4     1     2
Algorithm       Parameter settings                      Ref.
1.  PSO         w: 0.9-0.4, c1 = c2 = 2                 (Eberhart and Kennedy, 1995)
2.  CLPSO       w: 0.9-0.4, = = 1, = 0.5                (Liang et al., 2006)
3.  OLPSO       w: 0.9-0.4, = = 1, = 0.5                (Zhan et al., 2011)
4.  LSHADE      = 18, = 2.6, p = 0.11, H = 6            (Tanabe and Fukunaga, 2014)
5.  HCLPSO                                              (Lynn and Suganthan, 2015)
6.  GLPSO                                               (Gong et al., 2016)
7.  TSLPSO      w: 0.9-0.4, = = 1.5, : 0.5-2.5          (Xu et al., 2019)
8.  XPSO        = 0.5, = 5, = 0.2                       (Xia et al., 2020)
9.  LatinPSO    w = 0.7, = 1.5, = 1.5                   (Tian et al., 2019)
10. SpadePSO
4.2 The CEC2014 benchmark suite
The experiment is performed on the CEC2014 benchmark suite to verify the strength of the proposed algorithm, SpadePSO. The algorithms in comparison include PSO (Eberhart and Kennedy, 1995), CLPSO (Liang et al., 2006), OLPSO (Zhan et al., 2011), LSHADE (Tanabe and Fukunaga, 2014), HCLPSO (Lynn and Suganthan, 2015), GLPSO (Gong et al., 2016), TSLPSO (Xu et al., 2019) and XPSO (Xia et al., 2020), where HCLPSO, TSLPSO and XPSO are algorithms that improved the neighborhood topology of the population. CLPSO proposed the comprehensive learning strategy, which has strong exploration. OLPSO used the orthogonal learning strategy to construct the learning exemplar. LSHADE, one of the state-of-the-art Differential Evolution (DE) algorithms, was the champion algorithm of the CEC2014 competition. HCLPSO and TSLPSO are heterogeneous particle swarm optimizers, with two subpopulations responsible for exploitation and exploration, respectively. GLPSO integrated the advantages of the genetic algorithm into PSO by introducing crossover, mutation and selection operations to construct the learning exemplars. In XPSO, particles learned from both locally and globally best particles and dynamically updated the topology. The parameter settings of each algorithm are listed in TABLE 4.
TABLE 5 presents the Wilcoxon signed ranks test of the proposed SpadePSO on 30 functions with 10, 30, 50, 100 dimensions, respectively. The comparing results are based on the mean value of errors.
As shown in TABLE 5, SpadePSO is significantly better than PSO, OLPSO, GLPSO and XPSO in 10, 30, 50 and 100 dimensions. The performance of SpadePSO and CLPSO is very similar in 30, 50 and 100 dimensions. However, compared with CLPSO, SpadePSO has stronger exploitation, which is why SpadePSO performs better on the 10-dimensional functions. As the dimension increases, SpadePSO performs better than TSLPSO. Interestingly, SpadePSO behaves significantly better than HCLPSO in 10 and 50 dimensions but only slightly better in 30 and 100 dimensions; both are heterogeneous algorithms, and the mechanism of heterogeneous algorithms is complicated and worth further investigation. LSHADE is significantly better than PSO and all the PSO variants. The papers (Wu et al., 2018; Brest et al., 2016) show the performance of other DE variants on the CEC2014 benchmark suite, and it can be clearly observed that those DE variants are also better than the PSO algorithms. Hence, the essential difference between DE and PSO deserves deliberate investigation.

Algorithm      Sign  10D        30D        50D        100D
PSO (1995)     +     29 (0.00)  23 (0.01)  28 (0.00)  28 (0.00)
               -     1          7          2          2
               =     0          0          0          0
CLPSO (2006)   +     19 (0.08)  15         15         17
               -     11         15         14         13
               =     0          0 (0.88)   1 (0.58)   0 (0.88)
OLPSO (2011)   +     25 (0.00)  19 (0.01)  22 (0.00)  20 (0.02)
               -     5          11         7          10
               =     0          0          1          0
LSHADE (2014)  +     4          1          2          4
               -     25 (0.00)  28 (0.00)  24 (0.00)  26 (0.00)
               =     1          1          4          0
HCLPSO (2015)  +     25 (0.01)  20         21 (0.05)  19
               -     5          10         8          10
               =     0          0 (0.32)   1          1 (0.27)
GLPSO (2016)   +     29 (0.00)  24 (0.00)  26 (0.00)  24 (0.00)
               -     1          6          4          6
               =     0          0          0          0
TSLPSO (2019)  +     17         19         15         18 (0.09)
               -     13         11         14         12
               =     0 (0.73)   0 (0.19)   1 (0.19)   0
XPSO (2020)    +     25 (0.00)  26 (0.00)  27 (0.00)  25 (0.00)
               -     4          4          3          5
               =     1          0          1          0
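As a reading aid, +/-/= counts of this kind can be reproduced from paired per-function mean errors; the following is a minimal pure-Python sketch (the paper itself presumably uses a standard statistics package, and the tolerance `tol` is an illustrative assumption):

```python
# Sketch of the per-function comparison behind a +/-/= table.
# Inputs are paired mean errors of two algorithms on the same functions.

def sign_counts(err_a, err_b, tol=1e-8):
    """Count functions where A is better (+), worse (-) or equal (=) to B."""
    plus = sum(1 for a, b in zip(err_a, err_b) if b - a > tol)
    minus = sum(1 for a, b in zip(err_a, err_b) if a - b > tol)
    return plus, minus, len(err_a) - plus - minus

def wilcoxon_w(err_a, err_b):
    """Wilcoxon signed-rank statistic W: rank |differences| (zero differences
    dropped, ties given average ranks), return the smaller signed rank sum."""
    diffs = [a - b for a, b in zip(err_a, err_b) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + 1 + j) / 2.0  # average of rank positions i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    r_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    r_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(r_plus, r_minus)
```

The p-values in the table would then come from comparing W against the Wilcoxon signed-rank distribution, e.g. via `scipy.stats.wilcoxon`.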
4.3 Spread spectrum radar polyphase coding design
In this part, we take the optimization problem of spread spectrum radar polyphase coding design (SSRP) as a practical problem to verify the performance of SpadePSO. Because of its NP-hard property and the fact that its fitness function is piecewise smooth, SSRP is widely used as an optimization target for swarm intelligence algorithms (Mladenović et al., 2003; Das et al., 2009). We adopt the min-max nonlinear optimization model (Dukic and Dobrosavljevic, 1990) as the fitness function; the formulation is as follows:
(21)
\[
\min_{X}\; f(X) = \max\{\phi_1(X), \phi_2(X), \ldots, \phi_{2m}(X)\},
\]
where $X = \{(x_1, \ldots, x_n) \in \mathbb{R}^n \mid 0 \le x_j \le 2\pi,\; j = 1, \ldots, n\}$, $m = 2n - 1$, and
(22)
\[
\begin{aligned}
\phi_{2i-1}(X) &= \sum_{j=i}^{n} \cos\Big(\sum_{k=|2i-j-1|+1}^{j} x_k\Big), && i = 1, \ldots, n,\\
\phi_{2i}(X) &= 0.5 + \sum_{j=i+1}^{n} \cos\Big(\sum_{k=|2i-j|+1}^{j} x_k\Big), && i = 1, \ldots, n-1,\\
\phi_{m+i}(X) &= -\phi_i(X), && i = 1, \ldots, m.
\end{aligned}
\]
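Assuming the standard min-max formulation above, the SSRP objective can be evaluated directly; a brute-force sketch (not the optimized evaluation an experiment would use):

```python
import math

def ssrp_fitness(x):
    """Min-max SSRP objective: max over phi_1..phi_{2m} of Eqs. (21)-(22)
    for a phase vector x with 0 <= x_j <= 2*pi."""
    n = len(x)
    phi = []
    # phi_{2i-1}, i = 1..n
    for i in range(1, n + 1):
        phi.append(sum(
            math.cos(sum(x[k - 1] for k in range(abs(2 * i - j - 1) + 1, j + 1)))
            for j in range(i, n + 1)))
    # phi_{2i}, i = 1..n-1
    for i in range(1, n):
        phi.append(0.5 + sum(
            math.cos(sum(x[k - 1] for k in range(abs(2 * i - j) + 1, j + 1)))
            for j in range(i + 1, n + 1)))
    # phi_{m+i} = -phi_i, so the maximum is over phi and -phi
    return max(max(phi), -min(phi))
```

For instance, `ssrp_fitness([0.0] * n)` evaluates the all-zero phase code, where every cosine term equals 1.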
As shown in TABLE 6, the mean and standard deviation of SpadePSO are both lower than those of the other algorithms. The best objective value of XPSO is 1.03, the best fitness found among all algorithms, but its mean is 1.55. The mean of PSO is similar to that of SpadePSO, but its standard deviation is larger.
Stat.  PSO   CLPSO  OLPSO  HCLPSO  GLPSO  TSLPSO  XPSO  SpadePSO
best   1.31  1.40   1.32   1.31    1.15   1.23    1.03  1.23
mean   1.55  1.75   1.78   1.60    1.77   1.61    1.55  1.50
std    0.19  0.12   0.26   0.15    0.23   0.14    0.23  0.11
rank   3     6      8      5       7      4       3     1
4.4 Ordinary differential equations models inference
Because of the high computational cost, the representation of the structure and the design of the solution space, inferring the structure and parameters of ordinary differential equation models simultaneously is a challenging task (Usman et al., 2020; Heinonen et al., 2018). To prove the effectiveness of SpadePSO, we infer the structure and parameters of the HIV model from scratch, meaning that in the initial stage we know nothing about the variables, the parameters, or the interrelations between the items. The HIV model is described by Eqs. (23).
(23)
\[
\frac{dx}{dt} = \lambda - d\,x - \beta x v, \qquad
\frac{dy}{dt} = \beta x v - a\,y, \qquad
\frac{dv}{dt} = k\,y - u\,v.
\]
In the HIV model, the variables are $x$, $y$ and $v$, where $x$ is the number of uninfected cells, $y$ the number of infected CD4+ T lymphocytes and $v$ the number of free viruses. The initial conditions are the variable values at time $t = 0$.
Tian et al. (2019) defined the general form of an HIV-model equation as shown in Eq. (24). It is assumed that each equation of the HIV model is a combination of four different items: two single-variable items, one two-variable item and one item without variables. The interrelation between adjacent items within an equation is defined by an addition or subtraction operator.
(24)
\[
\frac{dX_1}{dt} = a X_1 \oplus b X_i \oplus c X_j X_k \oplus d,
\]
where $a$, $b$, $c$ and $d$ are parameters of the equation, each $\oplus$ represents either the addition or the subtraction operator, $X_i$ represents a variable in the model except for $X_1$, and $X_j$ and $X_k$ each represent any variable in the model. For example, if $X_1 = x$, then $X_i$ represents one of the state variables $y$ and $v$, and $X_j$ and $X_k$ each represent one of the state variables $x$, $y$ and $v$.
The first variable $X_1$ must appear in Eq. (24), because Eq. (24) is a differential equation about $X_1$, so we do not code $X_1$ into the representation. The representation of Eq. (24) therefore reduces to the choice of operators and variables, giving 192 structures, as shown in TABLE 7. To solve the problem, we use the serial number of a structure as one dimension of the problem. Since the HIV model has three equations, the dimension of the problem is 15, of which 12 dimensions represent the parameters and 3 dimensions represent the structures of the equations.
Serial number  ODE structures 
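To illustrate how a single serial-number dimension can index a structure, here is a hedged sketch; the choice sets below (and the resulting count of 144) are illustrative assumptions and do not reproduce the paper's actual table of 192 structures:

```python
from itertools import product

# Illustrative variable set of the HIV model.
VARIABLES = ["x", "y", "v"]

def enumerate_structures(own_var):
    """Enumerate candidate right-hand sides of Eq. (24) for one equation:
    three +/- operators, one extra single-variable term X_i (not the
    equation's own variable), and one two-variable product X_j * X_k."""
    others = [v for v in VARIABLES if v != own_var]
    structures = []
    for op1, op2, op3 in product("+-", repeat=3):
        for xi in others:
            for xj, xk in product(VARIABLES, repeat=2):
                structures.append(
                    f"d{own_var}/dt = a*{own_var} {op1} b*{xi} "
                    f"{op2} c*{xj}*{xk} {op3} d")
    return structures

def decode(serial, own_var="x"):
    """Map an integer dimension value back to a concrete structure string."""
    structures = enumerate_structures(own_var)
    return structures[serial % len(structures)]
```

With such an enumeration, one continuous dimension per equation can be rounded to an integer serial number and decoded into a structure during fitness evaluation.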
The objective of an individual is defined as the sum of the squared errors, shown in Eq. (25):
(25)
\[
f = \sum_{i=1}^{M} \sum_{j=1}^{N} \big( x_i(t_0 + j\,\Delta t) - \hat{x}_i(t_0 + j\,\Delta t) \big)^2,
\]
where $t_0$ is the starting time, $\Delta t$ the step size, $M$ the number of state variables, and $N$ the number of data points. $x_i$ is the given target time course data, and $\hat{x}_i$ is the time course data acquired by calculating the system of ODEs represented by a particle. If the inferred model is identical to the HIV model, the objective value is 0; otherwise, the larger the objective value, the greater the difference between the inferred model and the HIV model.
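A minimal sketch of evaluating such an objective for one candidate parameter vector, assuming the basic HIV dynamics form of Eqs. (23) and simple forward-Euler integration (the actual solver, step size and parameter names in the experiments may differ; everything here is illustrative):

```python
def hiv_rhs(state, params):
    """Right-hand side of the basic HIV dynamics model (Eqs. (23)):
    x uninfected cells, y infected cells, v free virus."""
    x, y, v = state
    lam, d, beta, a, k, u = params
    return (lam - d * x - beta * x * v,
            beta * x * v - a * y,
            k * y - u * v)

def sse_objective(params, target, dt=0.1):
    """Sum of squared errors (Eq. (25)) between a target time course and the
    trajectory produced by forward-Euler integration from the first data
    point. `target` is a list of (x, y, v) tuples sampled every dt."""
    state = list(target[0])
    err = 0.0
    for point in target[1:]:
        deriv = hiv_rhs(state, params)
        state = [s + dt * ds for s, ds in zip(state, deriv)]
        err += sum((s - p) ** 2 for s, p in zip(state, point))
    return err
```

A particle would supply both the decoded structure and the parameters; here only the parameter-fitting half is sketched, with the structure fixed to the true HIV form.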
The algorithms in the comparison include PSO, CLPSO, LatinPSO (Tian et al., 2019) and SpadePSO. LatinPSO is specially designed to infer the structure and parameters of ordinary differential equation models. The parameter settings of each algorithm are listed in TABLE 4. The iteration number is set to 3750, calculated according to (Suganthan et al., 2005), and the population size is set to 40, 400 and 1000, respectively.
As shown in TABLE 8, SpadePSO performs better than CLPSO and LatinPSO when the population size is set to 40: the mean of SpadePSO is half that of CLPSO and one fourth that of LatinPSO, while its performance is similar to PSO. The best solution matters most, because it is the one actually used in practice, and the best solutions show that there is still a big gap between the models inferred by the four PSOs and the HIV model. We therefore also set the population size to 400 and 1000 to look for better solutions, keeping the iteration number at 3750. When the population size is 400, the mean of SpadePSO is one fourth that of LatinPSO and half that of PSO and CLPSO; moreover, the best solution of SpadePSO is significantly better than those of the other PSOs. The best model inferred by SpadePSO is shown in Eqs. (26), and Fig. 7 compares it with the HIV model. When the population size is 1000, the improvement of CLPSO is not obvious; PSO and LatinPSO obtain better inferred models only at this population size, whereas SpadePSO already obtained similar results with a population size of 400. The best model inferred by SpadePSO with population size 1000 is shown in Eqs. (27).
Population size  40                  400                 1000
Algorithm        Best  Mean  Std     Best  Mean  Std     Best  Mean  Std
PSO  1.18E+04  4.54E+04  6.08E+04  1.16E+04  2.36E+04  2.10E+04  3.64E+03  2.63E+04  2.92E+04 
CLPSO  1.77E+04  1.05E+05  9.32E+04  1.02E+04  2.57E+04  1.86E+04  1.55E+04  2.24E+04  5.21E+03 
LatinPSO  2.39E+04  2.04E+05  1.13E+05  2.12E+04  4.01E+04  3.76E+04  2.05E+03  1.99E+04  1.43E+04 
SpadePSO  1.17E+04  4.74E+04  6.40E+04  3.39E+03  1.49E+04  5.53E+03  1.11E+03  9.69E+03  6.14E+03 
(26)  
(27)  
For ordinary differential equation models inference, recent studies suggest that a large population size is necessary. We think a better representation of the structure could reduce the required population size in the future. Similar to the non-adjacency of 7 (0111) and 8 (1000) in binary coding, the discontinuity of the structure representation increases the difficulty of this problem. How to better represent the structure remains to be studied.
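The binary-coding analogy can be made concrete: consecutive integers 7 and 8 differ in all four bits, whereas a reflected Gray code makes every pair of consecutive serial numbers differ in exactly one bit. A small sketch (Gray coding is offered here only as one possible remedy, not as the paper's method):

```python
def gray(n):
    """Reflected binary Gray code of n."""
    return n ^ (n >> 1)

def hamming(a, b):
    """Number of differing bits between a and b."""
    return bin(a ^ b).count("1")

print(hamming(7, 8))              # plain binary: prints 4
print(hamming(gray(7), gray(8)))  # Gray code: prints 1
```

A structure encoding with this adjacency property would let small moves in the search space correspond to small changes in the decoded structure.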
5 Conclusion
Inspired by the SPA in social science fields, we use fitness and the surprisingly popular degree as criteria to judge particle performance when selecting the learning exemplars. We propose SpadePSO with an adaptive dynamic communication topology, and we compare different topologies to construct the best topology for the SPA.
We evaluate the performance of SpadePSO on the CEC2014 benchmark suite with 10-, 30-, 50- and 100-dimensional functions in Experiment 4.2. The experimental results show that SpadePSO outperforms PSO and state-of-the-art PSO variants, including OLPSO, HCLPSO, GLPSO, TSLPSO and XPSO. In addition, we compare SpadePSO with LSHADE, one of the state-of-the-art optimization algorithms on the CEC2014 benchmark suite. Interestingly, LSHADE, the champion algorithm of CEC2014, still performs much better than the PSO variants, including the proposed SpadePSO; the essential difference between DE and PSO is worth deliberate investigation. In Experiment 4.3, we evaluate the performance of SpadePSO on the spread spectrum radar polyphase coding design, and the results show that SpadePSO performs better than the other PSO variants. Finally, we evaluate the performance of SpadePSO on ordinary differential equation models inference in Experiment 4.4, where SpadePSO performs better than LatinPSO, which is specially designed for this problem. In short, the experiments show the effectiveness of the SPA, and the SPA-based metric for evaluating particles can also be applied to other algorithms.
6 The CEC2014 benchmark suite
The functions of the CEC2014 benchmark suite are shown in TABLE 9.
Category  Function Name  Search Range  F(x*)
Unimodal Functions  F1: Rotated High Conditioned Elliptic Function  100  
F2: Rotated Bent Cigar Function  200  
F3: Rotated Discus Function  300  
Simple Multimodal Functions  F4: Shifted and Rotated Rosenbrock’s Function  400  
F5: Shifted and Rotated Ackley’s Function  500  
F6: Shifted and Rotated Weierstrass Function  600  
F7: Shifted and Rotated Griewank’s Function  700  
F8: Shifted Rastrigin’s Function  800  
F9: Shifted and Rotated Rastrigin’s Function  900  
F10: Shifted Schwefel’s Function  1000  
F11: Shifted and Rotated Schwefel's Function  1100  
F12: Shifted and Rotated Katsuura Function  1200  
F13: Shifted and Rotated HappyCat Function  1300  
F14: Shifted and Rotated HGBat Function  1400  

F15: Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function  1500  
F16: Shifted and Rotated Expanded Scaffer's F6 Function  1600  
Hybrid Functions  F17: Hybrid Function 1 (N = 3)  1700  
F18: Hybrid Function 2 (N = 3)  1800  
F19: Hybrid Function 3 (N = 4)  1900  
F20: Hybrid Function 4 (N = 4)  2000  
F21: Hybrid Function 5 (N = 5)  2100  
F22: Hybrid Function 6 (N = 5)  2200  
Composition Functions  F23: Composition Function 1 (N = 5)  2300  
F24: Composition Function 2 (N = 3)  2400  
F25: Composition Function 3 (N = 3)  2500  
F26: Composition Function 4 (N = 5)  2600  
F27: Composition Function 5 (N = 5)  2700  
F28: Composition Function 6 (N = 5)  2800  
F29: Composition Function 7 (N = 3)  2900  
F30: Composition Function 8 (N = 3)  3000 
7 Detailed results of SpadePSO on the CEC2014 benchmark suite
The columns show the best, mean and standard deviation of the errors between the best fitness values found in each run and the true optimal values. The experimental results are shown in TABLE 10.
Dim.  10D  30D  50D  100D  
[width=4em]Func.Stat.  Best  Mean  Std  Best  Mean  Std  Best  Mean  Std  Best  Mean  Std 
1  7.75E+01  1.09E+04  1.34E+04  2.79E+04  2.11E+05  1.52E+05  3.43E+05  6.31E+05  2.45E+05  1.39E+06  3.05E+06  8.78E+05 
2  5.32E02  3.74E+01  5.00E+01  6.89E05  1.11E+01  2.69E+02  3.58E+00  1.70E+02  3.33E+02  3.29E+00  5.62E+02  8.03E+02 
3  9.59E03  6.43E+01  9.44E+01  3.61E01  1.28E+02  1.28E+02  1.37E+02  1.68E+03  9.72E+02  2.68E+02  1.89E+03  1.30E+03 
4  2.05E03  1.44E+00  3.71E+00  3.30E02  3.91E+01  3.26E+01  2.17E+01  8.59E+01  2.01E+01  1.11E+02  1.98E+02  3.67E+01 
5  0.00E+00  1.85E+01  5.40E+00  2.01E+01  2.02E+01  3.92E02  2.00E+01  2.03E+01  9.49E02  2.00E+01  2.02E+01  1.68E01 
6  1.13E04  1.25E02  2.17E02  6.26E01  2.11E+00  1.10E+00  2.31E+00  9.46E+00  3.04E+00  4.03E+01  5.55E+01  6.13E+00 
7  3.25E03  4.22E02  2.28E02  0.00E+00  2.87E04  1.11E03  0.00E+00  5.76E03  7.24E03  0.00E+00  1.39E03  3.85E03 
8  0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00  5.58E02  2.34E01 
9  1.53E+00  4.48E+00  1.68E+00  1.59E+01  4.35E+01  1.12E+01  3.98E+01  9.22E+01  2.33E+01  1.84E+02  2.79E+02  3.66E+01 
10  0.00E+00  1.84E01  6.55E01  4.17E02  4.91E01  8.01E01  1.13E01  1.22E+01  3.52E+01  2.52E01  1.74E+01  4.69E+01 
11  1.87E+01  1.60E+02  1.21E+02  1.17E+03  1.91E+03  3.15E+02  3.07E+03  4.22E+03  4.34E+02  9.30E+03  1.11E+04  7.00E+02 
12  5.60E02  2.10E01  5.90E02  5.51E02  2.39E01  5.83E02  1.03E01  2.13E01  5.82E02  1.75E01  2.87E01  7.33E02 
13  2.54E02  8.67E02  3.37E02  1.09E01  2.27E01  6.27E02  2.22E01  3.30E01  5.84E02  2.84E01  4.13E01  5.10E02 
14  2.61E02  8.26E02  3.43E02  1.66E01  2.36E01  3.11E02  1.94E01  2.69E01  2.91E02  2.50E01  3.15E01  2.60E02 
15  3.99E01  7.66E01  2.15E01  1.52E+00  4.32E+00  1.53E+00  4.84E+00  9.05E+00  2.20E+00  2.19E+01  3.69E+01  8.04E+00 
16  2.07E01  1.32E+00  4.28E01  7.30E+00  9.36E+00  6.76E01  1.42E+01  1.77E+01  9.49E01  3.89E+01  4.05E+01  6.24E01 
17  1.88E+01  8.49E+02  7.01E+02  1.49E+03  8.82E+04  5.61E+04  1.83E+04  1.32E+05  1.07E+05  2.64E+05  8.04E+05  3.24E+05 
18  4.24E+00  3.02E+02  6.94E+02  3.19E+01  1.49E+02  1.36E+02  6.28E+01  1.52E+02  5.10E+01  2.51E+02  3.99E+02  1.48E+02 
19  3.91E02  5.64E01  3.66E01  2.84E+00  4.71E+00  1.12E+00  8.37E+01  1.34E+01  3.01E+00  2.94E+01  7.41E+01  1.81E+01 
20  2.44E+00  3.07E+01  4.62E+01  1.05E+02  8.56E+02  5.91E+02  3.44E+02  1.29E+03  8.34E+02  1.62E+03  4.50E+03  1.94E+03 
21  6.62E+00  8.53E+01  8.13E+01  3.51E+03  3.86E+04  3.35E+04  2.43E+04  1.72E+05  1.68E+05  1.29E+05  5.03E+05  3.74E+05 
22  1.98E02  1.94E+00  5.12E+00  2.81E+01  1.91E+02  7.33E+01  1.59E+02  6.93E+02  1.85E+02  1.12E+03  1.88E+03  3.13E+02 
23  3.29E+02  3.29E+02  2.59E13  3.15E+02  3.15E+02  1.90E12  3.44E+02  3.44E+02  1.03E12  3.48E+02  3.48E+02  2.93E11 
24  1.00E+02  1.12E+02  2.93E+00  2.24E+02  2.25E+02  1.12E+00  2.56E+02  2.60E+02  3.60E+00  3.57E+02  3.62E+02  1.98E+00 
25  1.04E+02  1.31E+02  2.18E+01  2.03E+02  2.06E+02  1.50E+00  2.07E+02  2.14E+02  2.76E+00  2.46E+02  2.56E+02  5.10E+00 
26  1.00E+02  1.00E+02  2.96E02  1.00E+02  1.00E+02  5.51E02  1.00E+02  1.06E+02  2.24E02  1.00E+02  1.96E+02  1.94E01 
27  9.45E01  4.44E+01  1.06E+02  3.45E+02  3.97E+02  1.52E+01  4.25E+02  5.45E+02  8.30E+02  5.09E+02  1.54E+03  4.88E+02 
28  3.44E+02  3.75E+02  2.25E+01  7.66E+02  8.88E+02  4.81E+01  1.17E+03  1.52E+03  2.01E+02  3.46E+03  5.26E+03  7.65E+02 
29  2.33E+02  2.79E+02  3.56E+01  7.95E+02  9.52E+02  1.05E+02  9.27E+02  1.21E+03  1.85E+02  1.36E+03  1.74E+03  3.06E+02 
30  4.96E+02  6.58E+02  1.01E+02  1.11E+03  1.87E+03  4.42E+02  8.55E+03  1.05E+04  9.50E+02  7.87E+03  9.68E+03  8.32E+02 
References
 Particle Swarm Optimization for Single Objective Continuous Space Problems: A Review. Evolutionary Computation 25 (1), pp. 1–54 (en). External Links: ISSN 10636560, 15309304, Link, Document Cited by: §1.
 iLSHADE: Improved LSHADE algorithm for single objective real-parameter optimization. In 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, pp. 1188–1195 (en). External Links: ISBN 9781509006236, Link, Document Cited by: §4.2.
 Eliminating Public Knowledge Biases in Information-Aggregation Mechanisms. Management Science 50 (7), pp. 983–994 (en). External Links: ISSN 00251909, 15265501, Link, Document Cited by: §1.
 The particle swarm  explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation 6 (1), pp. 58–73 (en). Note: Number: 1 External Links: ISSN 1089778X, Link, Document Cited by: §1.
 Surprisingly Popular Algorithm-Based Comprehensive Adaptive Topology Learning PSO. In 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, pp. 2603–2610 (en). External Links: ISBN 9781728121536, Link, Document Cited by: Figure 2, §3.1, §3.2, §3.4, §3.
 Differential Evolution Using a Neighborhood-Based Mutation Operator. IEEE Transactions on Evolutionary Computation 13 (3), pp. 526–553 (en). External Links: ISSN 1089778X, Link, Document Cited by: §4.3.
 A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation 1 (1), pp. 3–18 (en). External Links: ISSN 22106502, Link, Document Cited by: §3.4.
 A method of a spread-spectrum radar polyphase code design. IEEE Journal on Selected Areas in Communications 8 (5), pp. 743–749 (en). External Links: ISSN 07338716, Link, Document Cited by: §4.3.
 A new optimizer using particle swarm theory. In MHS’95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, pp. 39–43 (en). External Links: ISBN 9780780326767, Link, Document Cited by: §1, §4.2, Table 4.
 Genetic Learning Particle Swarm Optimization. IEEE Transactions on Cybernetics 46 (10), pp. 2277–2290 (en). External Links: ISSN 21682267, 21682275, Link, Document Cited by: §1, §4.2, Table 4.
 Small-world particle swarm optimization with topology adaptation. In Proceedings of the fifteenth annual conference on Genetic and evolutionary computation conference  GECCO '13, Amsterdam, The Netherlands, pp. 25 (en). External Links: ISBN 9781450319638, Link, Document Cited by: §2.2.

 Learning unknown ODE models with Gaussian processes. In Proceedings of the 35th International Conference on Machine Learning, J. Dy and A. Krause (Eds.), Proceedings of Machine Learning Research, Vol. 80, Stockholmsmässan, Stockholm Sweden, pp. 1959–1968. External Links: Link Cited by: §4.4.
 Surprisingly popular voting recovers rankings, surprisingly!. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21, Z. Zhou (Ed.), pp. 245–251. Note: Main Track External Links: Document, Link Cited by: §2.3.
 Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC'02 (Cat. No.02TH8600), Vol. 2, Honolulu, HI, USA, pp. 1671–1676 (en). External Links: ISBN 9780780372825, Link, Document Cited by: §1.
 A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems. Applied Soft Computing 13 (4), pp. 2188–2203. External Links: ISSN 15684946, Link, Document Cited by: §1.
 Testing the ability of the surprisingly popular method to predict NFL games.. Judgment and Decision Making 13 (4), pp. 322–333. Note: Place: US Publisher: Society for Judgment and Decision Making External Links: ISSN 19302975(Print) Cited by: §2.3.
 A Self-Learning Particle Swarm Optimizer for Global Optimization Problems. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42 (3), pp. 627–646 (en). External Links: ISSN 10834419, 19410492, Link, Document Cited by: §1.
 An Adaptive Multi-Swarm Optimizer for Dynamic Optimization Problems. Evolutionary Computation 22 (4), pp. 559–594 (en). External Links: ISSN 10636560, 15309304, Link, Document Cited by: §1.
 Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Information Sciences 293, pp. 370–382 (en). External Links: ISSN 00200255, Link, Document Cited by: §1.
 Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization. pp. 32 (en). Cited by: §3.3, §3.4, §4, §4.
 Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation 10 (3), pp. 281–295. External Links: ISSN 1089778X, 1089778X, 19410026, Document Cited by: §1, §2.1, §2.1, §4.2, Table 4.

 Niching particle swarm optimization based on Euclidean distance and hierarchical clustering for multimodal optimization. Nonlinear Dynamics 99 (3), pp. 2459–2477 (en). External Links: ISSN 0924090X, 1573269X, Link, Document Cited by: §2.2.
 Order-2 Stability Analysis of Particle Swarm Optimization. Evolutionary Computation 23 (2), pp. 187–216 (en). External Links: ISSN 10636560, 15309304, Link, Document Cited by: §1.
 Coevolutionary Particle Swarm Optimization With Bottleneck Objective Learning Strategy for Many-Objective Optimization. IEEE Transactions on Evolutionary Computation 23 (4), pp. 587–602. External Links: ISSN 1089778X, 1089778X, 19410026, Document Cited by: §1.
 How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences 108 (22), pp. 9020–9025 (en). External Links: ISSN 00278424, 10916490, Link, Document Cited by: §1.
 Machine Truth Serum. arXiv:1909.13004 [cs, stat] (en). Note: arXiv: 1909.13004 External Links: Link Cited by: §2.3.
 Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm and Evolutionary Computation 24, pp. 11–24 (en). External Links: ISSN 22106502, Link, Document Cited by: §1, §2.1, §4.1, §4.2, Table 4.
 Solving spread spectrum radar polyphase code design problem by tabu search and variable neighbourhood search. European Journal of Operational Research 151 (2), pp. 389–399 (en). External Links: ISSN 03772217, Link, Document Cited by: §4.3.
 Renormalization group analysis of the small-world network model. Physics Letters A 263 (4-6), pp. 341–346 (en). Note: arXiv: cond-mat/9903357 External Links: ISSN 03759601, Link, Document Cited by: §2.2.
 Measuring exploration/exploitation in particle swarms using swarm diversity. In 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pp. 1128–1134. Note: ISSN: 19410026 External Links: Document Cited by: §3.3.
 A solution to the singlequestion crowd wisdom problem. Nature 541 (7638), pp. 532–535 (en). External Links: ISSN 00280836, 14764687, Link, Document Cited by: §1, §2.3.
 A Novel Shortcut Addition Algorithm With Particle Swarm for Multisink Internet of Things. IEEE Transactions on Industrial Informatics 16 (5), pp. 3566–3577 (en). External Links: ISSN 15513203, 19410050, Link, Document Cited by: §2.2.
 A modified particle swarm optimizer. In 1998 IEEE International Conference on Evolutionary Computation Proceedings. IEEE World Congress on Computational Intelligence (Cat. No.98TH8360), pp. 69–73. External Links: Document Cited by: §1.
 Intuitive Biases in Choice versus Estimation: Implications for the Wisdom of Crowds. Journal of Consumer Research 38 (1), pp. 1–15 (en). External Links: ISSN 00935301, 15375277, Link, Document Cited by: §1.

 Evolving Neural Networks through Augmenting Topologies. Evolutionary Computation 10 (2), pp. 99–127 (en). Note: Number: 2 External Links: ISSN 10636560, 15309304, Link, Document Cited by: §1.
 Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization. Natural Computing, pp. 341–357 (en). Cited by: §4.4, §4.

 Evolution of Deep Convolutional Neural Networks Using Cartesian Genetic Programming. Evolutionary Computation 28 (1), pp. 141–163 (en). Note: Number: 1 External Links: ISSN 10636560, 15309304, Link, Document Cited by: §1.
 Improving the search performance of SHADE using linear population size reduction. In 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, pp. 1658–1665 (en). External Links: ISBN 9781479914883 9781479966264, Link, Document Cited by: §4.2, Table 4.
 Self regulating particle swarm optimization algorithm. Information Sciences 294, pp. 182–202 (en). External Links: ISSN 00200255, Link, Document Cited by: §1.
 LatinPSO: An algorithm for simultaneously inferring structure and parameters of ordinary differential equations models. Biosystems 182, pp. 8–16 (en). External Links: ISSN 03032647, Link, Document Cited by: §4.4, §4.4, Table 4.
 Inferring structure and parameters of dynamic system models simultaneously using swarm intelligence approaches. Memetic Computing 12 (3), pp. 267–282 (en). External Links: ISSN 18659284, 18659292, Link, Document Cited by: §1, §4.4.
 The cellular particle swarm optimization algorithm. Cited by: §1.

 Committee-Based Active Learning for Surrogate-Assisted Particle Swarm Optimization of Expensive Problems. IEEE Transactions on Cybernetics 47 (9), pp. 2664–2677. External Links: ISSN 21682267, 21682275, Document Cited by: §1.
 Collective dynamics of 'small-world' networks. Nature 393 (6684), pp. 440–442. External Links: ISSN 14764687, Link, Document Cited by: §2.2.
 Ensemble of differential evolution variants. Information Sciences 423, pp. 172–186 (en). External Links: ISSN 00200255, Link, Document Cited by: §4.2.
 An expanded particle swarm optimization based on multi-exemplar and forgetting ability. Information Sciences 508, pp. 105–120 (en). External Links: ISSN 00200255, Link, Document Cited by: §1, §4.2, Table 4.
 Triple Archives Particle Swarm Optimization. IEEE Transactions on Cybernetics 50 (12), pp. 4862–4875 (en). External Links: ISSN 21682267, 21682275, Link, Document Cited by: §1, §1.
 Particle swarm optimization based on dimensional learning strategy. Swarm and Evolutionary Computation 45, pp. 33–51 (en). External Links: ISSN 22106502, Link, Document Cited by: §1, §1, §4.2, Table 4.
 A Level-Based Learning Swarm Optimizer for Large-Scale Optimization. IEEE Transactions on Evolutionary Computation 22 (4), pp. 578–594 (en). Note: Number: 4 External Links: ISSN 1089778X, 1089778X, 19410026, Link, Document Cited by: §1, §1.
 Adaptive Particle Swarm Optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 39 (6), pp. 1362–1381 (en). Note: Number: 6 External Links: ISSN 10834419, Link, Document Cited by: §1.
 Orthogonal Learning Particle Swarm Optimization. IEEE Transactions on Evolutionary Computation 15 (6), pp. 832–847. External Links: ISSN 19410026, Document Cited by: §1, §4.2, Table 4.