Dynamic Swarm Dispersion in Particle Swarm Optimization for Mining Unsearched Area in Solution Space (DSDPSO)

07/02/2018
by Anvar Bahrampour, et al.

Premature convergence in the particle swarm optimization (PSO) algorithm usually causes the search to settle on a local optimum and prevents it from surveying those regions of the solution space that contain optimal points. In this paper, by applying special mechanisms, suitable regions are detected and the swarm is then guided to them by dispersing part of the particles at proper times. This process is called dynamic swarm dispersion in PSO (DSDPSO). In order to specify the proper times and to steer the evolutionary process between exploring and exploiting behaviors, we used a diversity measuring approach and implemented the dispersion mechanism. To promote the performance of the DSDPSO algorithm, three different policies were applied: particle relocation, velocity settings of dispersed particles, and parameter setting. We compared the promoted algorithm with similar recent approaches, and according to the numerical results, the proposed algorithm outperformed the basic GPSO, LPSO, DMS-PSO, CLPSO and APSO on most of the 12 standard benchmark problems with different properties taken in this study.



1 Introduction

The particle swarm optimization (PSO) algorithm, invented by Eberhart and Kennedy (1995a,b), applies the concept of social interaction to problem solving (Soleymani et al., 2007). Each particle, denoted by $x_i$, represents a point in the search space, i.e., a candidate solution of the problem. The PSO algorithm iteratively modifies the position and the velocity of each particle as it searches for the optimal solution based on Equation 1.

$$v_i(t+1) = w\, v_i(t) + c_1 r_1 \big(pbest_i - x_i(t)\big) + c_2 r_2 \big(gbest - x_i(t)\big)$$
$$x_i(t+1) = x_i(t) + v_i(t+1) \qquad (1)$$

where $v_i$ in the first equation is the velocity of the $i$th particle. The first part of Equation 1, $w\, v_i(t)$, is the inertia of the previous velocity, in which the inertia weight $w$ is predefined by the user. The second part, $c_1 r_1 (pbest_i - x_i(t))$, represents the cognition of the particle, i.e., its personal thinking, and the third part, $c_2 r_2 (gbest - x_i(t))$, is a social component. In this equation, $c_1$ and $c_2$ are acceleration constants; they weight the stochastic acceleration terms that pull each particle toward the personal and global best positions. The coefficients $r_1$ and $r_2$ are uniformly generated random numbers in the interval (0, 1]. Although PSO is simple, it is a powerful search technique that has been reported to perform satisfactorily in many studies (Cheng and Shi, 2011).
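As a concrete reference, here is a minimal Python sketch of the update in Equation 1. The vectorized shapes and the default values $w = 0.7$ and $c_1 = c_2 = 2.0$ are illustrative assumptions rather than the paper's settings, and NumPy draws the random coefficients from [0, 1) rather than (0, 1].

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update (Equation 1) for an (m, n) swarm.

    x, v   : (m, n) particle positions and velocities
    pbest  : (m, n) personal best positions
    gbest  : (n,) global best position
    """
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)                # stochastic cognitive coefficients
    r2 = rng.random(x.shape)                # stochastic social coefficients
    v = (w * v                              # inertia component
         + c1 * r1 * (pbest - x)            # cognitive (personal) component
         + c2 * r2 * (gbest - x))           # social component
    return x + v, v                         # updated positions and velocities
```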

Particles in PSO converge quickly because information flows rapidly among them, so the diversity of the swarm decreases sharply over successive iterations and can lead to a suboptimal solution. In this case, the evolutionary process is said to have been trapped in a local optimum, or premature convergence is said to have occurred.

The standard PSO algorithm can easily get trapped in a local optimum while solving complex multimodal problems. Such deficiencies have restricted wider application of PSO (Zhan et al., 2009; Liang et al., 2006; Li and Engelbrecht, 2007). There are several reasons why this problem arises; one of the most important is decreasing diversity of the population. A number of variants of the PSO algorithm have been proposed to overcome the problem of diversity loss. One common method for increasing diversity is mutation. Mutation improves exploration ability and can be applied to different elements of a particle swarm; its effect depends on which elements of the swarm are mutated (Secrest and Lamont, 2003). Velocity vector mutation is equivalent to mutation of the particle's position vector, provided that the same mutation operator is used.

Jie et al. (2006) introduced a negative feedback mechanism into particle swarm optimization and developed an adaptive PSO as well. This mechanism takes advantage of the swarm diversity to control the tuning of the inertia weight (PSO-DCIW), which in turn can adjust the swarm diversity adaptively and contribute to a successful global search. There are other methods that measure diversity and apply mutation to particle positions to improve the performance of the algorithm, including Gaussian mutation (Wei et al., 2002; Higashi and Iba, 2003; Secrest and Lamont, 2003; Sriyanyong, 2008; Krohling, 2005), Cauchy mutation (Krohling, 2005; Stacey et al., 2003), and chaos mutation (Dong et al., 2008; Yang et al., 2009; Yue-Lin et al., 2008).

There are other ways to introduce diversity and to control its degree. Zhan et al. (2009) proposed an algorithm named APSO that does so by automatic control of the algorithmic parameters. A learning strategy whereby the historical best information of all other particles is used to update a particle's velocity was suggested by Liang et al. (2006) and called CLPSO. Riget and Vesterstrorm (2002) proposed an algorithm named ARPSO in which, if diversity was above the predefined threshold $d_{high}$, particles attracted each other, and if it fell below $d_{low}$, particles repelled each other until the required high diversity was restored. Repulsion to keep particles away from the optimum was first proposed by Parsopoulos and Vrahatis (2004). Lovbjerg and Krink (2002) introduced dispersion among particles that were too close to each other, and Blackwell and Bentley (2002) reduced the attraction of the swarm centers in order to prevent the particles from clustering tightly in one region of the search space and to escape from local optima. A dynamic multi-swarm particle swarm optimizer (DMS-PSO) was proposed by Liang and Suganthan (2005). In this method, the whole population is divided into many small swarms, which are frequently regrouped using various regrouping schedules so that information is exchanged among the swarms.

The diversity level of the swarm during the evolutionary process was measured by Mohamad Nezami and Bahrampour (2013). In their study, once the diversity of the population drops below a predefined threshold, a system for generating diversified artificial particles (DAP system) is activated and starts to replace some of the swarm particles with relatively bad fitness by generated artificial particles (DAP particles) with high diversity and fairly good fitness. Bahrampour and Mohamad Nezami (2013) and Mohamad Nezami et al. (2013) investigated this basic idea more deeply and extended it with three concepts: idle particles, relocation or dispersion, and precise search in new regions of the search space by artificial particles. They proposed a mechanism to guide the swarm based on diversity, using a diversifying process to detect suitable positions in the search space (points with fairly good fitness and good distance from the current distribution of the swarm particles) to which some of the existing idle particles (particles whose personal best positions have not changed for a long time) are dispersed or relocated, in the hope of increasing the diversity level of the swarm and escaping from a local optimum by detecting better areas of the search space. The algorithm proposed by Mohamad Nezami and Bahrampour (2013) was improved by defining a new velocity equation for artificial particles, used for a limited duration after each replacement to search more carefully and precisely in new regions of the search space (Mohamad Nezami et al., 2013). In this paper, all the previous works are combined and a comprehensive study is conducted on the behavior of the PSO algorithm with the aforementioned ideas. In addition, we justify the policies and parameter settings used in designing the dynamic swarm dispersion particle swarm optimization (DSDPSO) algorithm.

The article is organized as follows. In section 2, diversity and its measurement are defined. The DSDPSO algorithm is described in section 3. Experimental results are discussed in section 4. Finally, conclusions are presented in section 5.

2 Diversity Definition and Measuring

Population diversity is a way to monitor the degree of convergence or divergence of the PSO search process (Cheng and Shi, 2011). There are several measures for detecting the diversity level of the population. Shi and Eberhart (2008; 2009) and Zhan et al. (2010) gave three definitions for PSO population diversity measurement: position diversity, velocity diversity, and cognitive diversity. Cheng and Shi (2011) gave a new definition of population diversity measurement, called the $L_1$ norm, based on both element-wise and dimension-wise diversities. They showed that useful information about the search process of an optimization algorithm can be obtained by using the dimension-wise definition of the $L_1$ norm variant. Therefore, we apply the $L_1$ norm of position diversity measurement in this paper. Let $m$ be the number of particles and $n$ the number of dimensions. The dimension-wise definition based on the $L_1$ norm is given in Equation 2.


$$\bar{x}_j = \frac{1}{m}\sum_{i=1}^{m} x_{ij}, \qquad D_j = \frac{1}{m}\sum_{i=1}^{m}\left|x_{ij} - \bar{x}_j\right|, \qquad D = \frac{1}{n}\sum_{j=1}^{n} D_j \qquad (2)$$

where the vector $\bar{x} = (\bar{x}_1, \ldots, \bar{x}_n)$ is the mean of the particle positions on each dimension, $(D_1, \ldots, D_n)$ is the particle position diversity vector based on the $L_1$ norm, and $D$ is the whole-population diversity value.

Chang et al. (2010) introduced other approaches to measure population diversity in evolutionary computation, including Hamming distance, Euclidean distance, information entropy, and so on. In this paper, we apply the Euclidean distance when selecting the particles to be dispersed by the dispersion mechanism. The Euclidean distance, defined in Equation 3, measures the distance between two particles $x_a$ and $x_b$.

$$d(x_a, x_b) = \sqrt{\sum_{j=1}^{n} \left(x_{aj} - x_{bj}\right)^2} \qquad (3)$$
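For reference, both measures translate directly into NumPy; this small sketch assumes an (m, n) position matrix and is only an illustration of Equations 2 and 3.

```python
import numpy as np

def l1_position_diversity(x):
    """Dimension-wise L1-norm position diversity (Equation 2); x is (m, n)."""
    x_bar = x.mean(axis=0)                  # mean position on each dimension
    d = np.abs(x - x_bar).mean(axis=0)      # per-dimension diversity vector
    return d.mean()                         # whole-population diversity D

def euclidean_distance(a, b):
    """Euclidean distance between two particles (Equation 3)."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.sqrt(np.sum((a - b) ** 2)))
```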

3 Dynamic Swarm Dispersion Particle Swarm Optimization

The problem of premature convergence in the particle swarm optimization (PSO) algorithm often causes the search process to get trapped in a local optimum. This problem usually occurs when the diversity of the swarm decreases and the swarm cannot escape from the local optimum. In this article, we periodically disperse some of the swarm's particles to new suitable positions in the search space: positions with fairly good fitness and relatively far from the convergence point. These new points are recognized based on the history of the search process, in order to enhance the diversity of the swarm and to promote the exploration ability of the algorithm. In other words, the search process should periodically select some of the converged particles in the current swarm and relocate them to new, different points so as to probe new regions of the search space. Both the selection process and the process of detecting the target positions of the relocated particles act on the history of the search process up to the dispersion time. In this approach, we do not change the previous personal best positions of dispersed particles at the dispersion stage, in order to reuse the results of the efforts made previously by the relocated particles. Therefore, this is a relocation of the selected particles to new suitable positions rather than a replacement with newly generated ones. Consequently, the diversity level of the swarm is increased up to a certain degree. The evolutionary process will steadily reduce the diversity level again, so the dispersion process should be repeated periodically.

In the PSO algorithm, the speed of convergence is very high, so the swarm dispersion process should be repeated frequently. On the other hand, repeating this process is relatively time-consuming, and a high frequency of swarm dispersion also decreases the exploitation ability of the algorithm. Therefore, we introduce a new parameter, the period $P$, which defines how many generations pass between consecutive calls of the dispersion system. Another new parameter in the DSDPSO algorithm, the dispersion rate $R$, defines the fraction of the swarm that should be relocated in each dispersion call.

The architecture of DSDPSO is illustrated in Figure 1. In DSDPSO, we partially disperse the swarm to prevent premature convergence and to extend the search into regions that would probably remain unexplored by the simple PSO algorithm. As Figure 1 shows, there are two different phases in the process of dynamic swarm dispersion in particle swarm optimization. The first phase, illustrated on the left side of Figure 1, is a simple PSO plus one last step for updating an external archive to be used in the dispersion mechanism. In DSDPSO, we normally execute phase one in every iteration. In the second phase, shown on the right side of Figure 1, the selected particles of the swarm are dispersed to new positions in the search space in each period. This phase enhances the swarm diversity by relocating part of the swarm in order to escape from local optima. A simplified end-to-end code sketch is given after Figure 1.

Figure 1: Architecture of DSDPSO
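To make the two phases concrete, the following self-contained Python sketch runs a deliberately simplified DSDPSO on the sphere function. Several simplifications here are our own assumptions: per-dimension recombination of random archive members stands in for the roulette-wheel/crossover mechanism of section 3.1, worst-fitness selection stands in for the hybrid policy of section 3.2.1, any improvement of the global best triggers an archive update, and dispersed particles keep using Equation 1 rather than Equation 4.

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=-1)

def dsdpso(f=sphere, n=30, m=20, iters=3000, P=30, R=0.45,
           lo=-100.0, hi=100.0, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (m, n))
    v = np.zeros((m, n))
    pbest, pfit = x.copy(), f(x)
    g = pfit.argmin()
    gbest, gfit = pbest[g].copy(), pfit[g]
    archive = rng.uniform(lo, hi, (100, n))         # external archive
    for t in range(1, iters + 1):
        # Phase 1: ordinary PSO iteration (Equation 1).
        w = 0.9 - 0.5 * t / iters                   # linearly 0.9 -> 0.4
        r1, r2 = rng.random((m, n)), rng.random((m, n))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fit = f(x)
        imp = fit < pfit
        pbest[imp], pfit[imp] = x[imp], fit[imp]
        if pfit.min() < gfit:                       # gbest improved: update
            gbest, gfit = pbest[pfit.argmin()].copy(), pfit.min()
            center = archive.mean(axis=0)           # archive by replacing its
            j = np.linalg.norm(archive - center, axis=1).argmin()
            archive[j] = gbest                      # least diverse member
        # Phase 2: every P iterations, disperse R of the swarm.
        if t % P == 0:
            k = int(R * m)
            idx = np.argsort(fit)[-k:]              # worst-fitness particles
            rows = rng.integers(0, len(archive), (k, n))
            x[idx] = archive[rows, np.arange(n)]    # mix archive dimensions
            v[idx] = 0.0      # zero initial velocity; pbest is left untouched
    return gbest, gfit
```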

Figure 2 shows the process of the dynamic swarm dispersion particle swarm optimization (DSDPSO) algorithm. The steps of this process are identical to the steps of the standard PSO except for steps 5 and 6. In order to detect target positions for the particles selected for dispersion, we use information collected from previous generations of the PSO process, namely the previous global best particles, to determine good points in the search space. To do so, we maintain an external archive that stores the global best particles of previous generations as good, already visited positions, in the hope of finding better points in the regions where these stored particles are located. In step 4, we update the external archive if necessary; there is no need to update it in every iteration (the process updates the external archive only when a notable improvement occurs in the fitness of the global best particle).

When the dispersion system is called in step 5, the last dispersion took place $P$ generations earlier. At this time, the dispersion process selects $R$ percent of the swarm's particles, determines the same number of target positions, and relocates the selected particles to the new positions. The dispersion process increases the swarm diversity by relocating selected particles to promising new positions. The process of determining new target positions and the policy for selecting the swarm's particles for relocation are described in the following sections. The final step of the dispersion process is to reset the velocity of dispersed particles to zero, because we need each dispersed particle to search very carefully in order to find better solutions in the vicinity of its new location. In this paper, we found that retaining the previous nonzero velocities of dispersed particles probably causes them to skip rapidly out of the new region, leaving some areas of the search space unsearched. This idea is verified in section 3.2.2.

In step 6, we use two different velocity equations, Equation 1 and Equation 4, to update the velocities of the swarm particles. The following velocity equation is used only for those dispersed particles generated in the latest dispersion process call. Since we want relocated particles to search new regions more carefully, we use the minimum inertia weight $w_{\min}$. Moreover, since relocated particles are placed in regions that are probably far from the converged particles, the velocity component toward the global best particle is likely to be large. Thus, we use a random coefficient $\rho$ in the interval (0, 1) in the new velocity equation, so that relocated particles are only slowly attracted to the converged swarm. This idea is verified in section 3.2.3.

$$v_i(t+1) = w_{\min}\, v_i(t) + c_1 r_1 \big(pbest_i - x_i(t)\big) + \rho\, c_2 r_2 \big(gbest - x_i(t)\big), \qquad \rho \sim U(0,1) \qquad (4)$$
Figure 2: Steps of DSDPSO algorithm
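A sketch of Equation 4 in the same vein. Note that Equation 4 above is itself reconstructed from the description in the text, so the exact placement of the damping coefficient $\rho$, and the defaults $w_{\min} = 0.4$ (the endpoint of the linearly decreasing inertia schedule) and $c_1 = c_2 = 2.0$, are assumptions.

```python
import numpy as np

def dispersed_velocity(x, v, pbest, gbest, w_min=0.4, c1=2.0, c2=2.0, rng=None):
    """Velocity update for recently dispersed particles (Equation 4):
    minimum inertia weight plus a random damping coefficient rho on the
    attraction toward the global best, so relocated particles drift back
    to the converged swarm only slowly."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    rho = rng.random((x.shape[0], 1))       # damping coefficient in [0, 1)
    return (w_min * v
            + c1 * r1 * (pbest - x)
            + rho * c2 * r2 * (gbest - x))
```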

To illustrate the impact of the dispersion mechanism on the diversity level of the swarm in DSDPSO, we use the 2-D Rastrigin function ($f_7$ in Table 1) with a swarm of 30 particles and a dispersion rate of 45%. The swarm distributions in different running phases of this experiment are shown in Figure 3. The stochastic initialization of particles in the first iteration is illustrated in Figure 3(a). The learning mechanism of the algorithm then pulls particles toward the optimal region by the 30th iteration, as illustrated in Figure 3(b). In the 30th iteration, the DSDPSO algorithm relocates some particles of the current swarm to increase the swarm diversity based on the dispersion mechanism, as shown in Figure 3(c). At this stage, the new swarm is called the dispersed swarm. Figure 3(e) and Figure 3(f) show the behavior of the swarm in the 60th iteration, analogous to Figure 3(b) and Figure 3(c) respectively. Figures 3(c) and 3(f) illustrate that the dispersion mechanism properly disperses particles throughout the search space. It is noteworthy that, despite the increased diversity of the swarm, the fitness of the new positions produced by the dispersion mechanism is not worse than the fitness of the particles before relocation. So, in the DSDPSO algorithm, target positions can be generated with greater diversity and better fitness as well. Figures 3(d) and 3(g) substantiate this claim.

Finally, Figure 4 presents the diversity curves of the standard global PSO and DSDPSO, showing how the diversity level of the swarm changes in GPSO and DSDPSO over 100 iterations.

Figure 3: (a) Stochastic distribution of the swarm in generation 1. (b) Swarm distribution in iteration 30. (c) Swarm distribution in iteration 30 after the dispersion call. (d) Fitness diagrams of the swarms in states b (converged pop) and c (dispersed pop). (e) Swarm distribution in iteration 60. (f) Swarm distribution in iteration 60 after the dispersion call. (g) Fitness diagrams of the swarms in states e (converged pop) and f (dispersed pop).
Figure 4: Variation of swarm diversity for function $f_7$ in GPSO and DSDPSO

3.1 Target Positions of Selected Particles

In this section, we describe a mechanism for determining the target positions to which the selected particles are dispersed over the search space. We establish an external archive (of size 100 in our experiments; see section 3.3) and initialize it with random particles. First, we gather the particles having the best fitness in the initial generations of the PSO process and use them to replace the archive particles with bad fitness. Then we need a replacement policy that gathers effective particles in the external archive: these particles should have both good fitness and considerable distance from the center of the current distribution of the archive particles, to prevent the external archive itself from converging. In this study, after the initial generations, replacement is applied only when the fitness of the global best particle improves remarkably. In that case, one of the particles with low diversity is replaced by the new global best particle of the current swarm. To detect particles with low diversity and remove them from the external archive, we use the Euclidean distance described in section 2 and measure the distance of each particle from the mean of the external archive particles.

Figure 5 illustrates the mechanism of determining target positions. To determine good new positions for relocating converged particles of the swarm from the information in the external archive, two new particles, $X_{\max}$ and $X_{\min}$ (the vectors of the maximum and minimum values in each dimension), are added to the external archive for mutation purposes. Then a roulette wheel is created to weigh each particle of the external archive based on its fitness and its distance from the center of the swarm. In order to detect a new target position in the search space, we determine the value of each of its dimensions separately: for each dimension, one roulette-wheel selection picks a particle of the external archive, and the value stored in the same dimension of the selected particle is used as that dimension of the new target position. After selecting a value for each dimension, we probably have a new suitable position for the dispersion process. However, we do not use this point directly. We collect all the generated points in one mating pool and also add the external archive particles, as known good points in the search space, to the pool. Then operators such as genetic crossover and mutation are applied to produce new points with probably good fitness and good diversity. Afterwards, the best points ($R$ percent of the swarm size in this research) are selected based on their fitness and their distance from the center of the current swarm distribution and returned to the dispersion process, which randomly relocates the same number of selected particles of the swarm to these new positions. This process remarkably increases the diversity of the swarm and helps to escape from the local optimum trap. Figure 6 illustrates this process, and a code sketch follows it.

Figure 5: Steps of detecting target positions in DSDPSO algorithm
Figure 6: Process of determining target positions
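The mechanism of Figures 5 and 6 can be compressed into the following sketch. It omits the genetic crossover and mutation stage, folds fitness and distance from the swarm center into a single ad hoc roulette weight, and oversamples candidates by a factor of three before keeping the best (loosely mirroring the mating-pool filtering), so every name, weight formula, and constant here is an assumption, not the paper's exact procedure.

```python
import numpy as np

def make_targets(archive, f, swarm_center, k, lo, hi, rng=None):
    """Build k target positions dimension-wise from the external archive."""
    if rng is None:
        rng = np.random.default_rng()
    n = archive.shape[1]
    pool = np.vstack([archive,
                      np.full(n, lo),                # X_min mutation particle
                      np.full(n, hi)])               # X_max mutation particle
    fit = np.apply_along_axis(f, 1, pool)
    dist = np.linalg.norm(pool - swarm_center, axis=1)
    w = (fit.max() - fit) / (np.ptp(fit) + 1e-12)    # reward good fitness...
    w += dist / (dist.sum() + 1e-12)                 # ...and large distance
    w /= w.sum()                                     # roulette probabilities
    cand = np.empty((3 * k, n))
    for c in range(3 * k):
        rows = rng.choice(len(pool), size=n, p=w)    # one spin per dimension
        cand[c] = pool[rows, np.arange(n)]           # copy chosen dimensions
    cand_fit = np.apply_along_axis(f, 1, cand)
    return cand[np.argsort(cand_fit)[:k]]            # k best candidates
```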

3.2 Performing Policies

As mentioned in the previous section, to promote the performance of the DSDPSO algorithm, we have to perform several policies properly. In this section, we study three policies: the relocation policy, the initial velocity of dispersed particles, and the velocity equation of dispersed particles. These policies are elaborated in the following three subsections. In the first subsection, we explain different methods for selecting particles of the current population for relocation and then compare them. In the second subsection, the initial velocity of dispersed particles is explained. Finally, in the last subsection, we test different policies for controlling the behavior of the dispersed particles and demonstrate their abilities. In order to compare the different approaches under each policy, we used a collection of 10 standard benchmark problems. Mathematical models of the problems along with the true optimal values are given in Table 1. All the experiments mentioned in these subsections were conducted under the same conditions. To compare the approaches with high stability, ten standard problems with different properties were chosen, and the average of 20 runs was taken for each one. The same initial population is used for all algorithms. The population size is set to 20, and all test problems have 30 dimensions. A linearly decreasing inertia weight starting at 0.9 and ending at 0.4 is used, together with the user-defined acceleration constants $c_1$ and $c_2$. For each algorithm, the maximum number of iterations is set to 3000. In all of the evaluations, the new parameter $P$ is set to 30, the external archive size to 100, and the dispersion rate to 45%.

Function | Interval | Optimum
$f_1$: Sphere Function | [-100,100] | 0
$f_2$: Schwefel Function 1.2 | [-100,100] | 0
$f_3$: Schwefel's Problem 1.2 with Noise | [-100,100] | 0
$f_4$: Noisy Function | [-1.28,1.28] | 0
$f_5$: Rosenbrock Function | [-30,30] | 0
$f_6$: Schwefel Function | [-500,500] | 0
$f_7$: Rastrigin Function | [-5.12,5.12] | 0
$f_8$: Noncontinuous Rastrigin Function | [-5.12,5.12] | 0
$f_9$: Shaffer's Function | [-32.767,32.767] | 0
$f_{10}$: Griewank Function | [-600,600] | 0
$f_{11}$: Rotated Ackley Function | [-32,32] | 0
$f_{12}$: Rotated Rastrigin's Function | [-5.12,5.12] | 0
Table 1: Benchmark functions used in our experimental study (function definitions follow Yao et al., 1999)

3.2.1 Particles Relocation Policy

Selecting particles and relocating them to new positions, which was fully explained in section 3.1, is performed by one of three approaches. In the first approach, particles with low fitness are selected for dispersion. This approach is applied because particles with low fitness are unlikely to find the optimal points of the search space; it is assumed that the regions around these particles are probably not suitable for finding the global optimum. In the next approach, idle particles are relocated to new points in the search space. An idle particle is a particle whose personal best position has not changed for a long period of time; such a particle will probably not find better locations.

Each of these two approaches might have shortcomings. In the first approach, we disperse particles with low fitness, even though these particles might find better locations in subsequent generations and their fitness might subsequently improve. In the second approach, particles are sometimes relocated merely because their personal best positions have not changed for a while, regardless of the fact that they may have high fitness. To avoid such problems, we introduce a third approach, a combination of the two previous ones. In the hybrid approach, particles with relatively low fitness whose personal best positions have not changed for a long time are dispersed; a sketch of this policy follows Table 2. The results of these approaches are shown in Table 2. According to the results, the hybrid approach is the best dispersion policy.

Relocation Policy | Low Fitness | Idle Particle | Hybrid
Table 2: Results of the particle relocation approaches
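A minimal sketch of the hybrid policy; the idle threshold, the median fitness cutoff, and the fallback rule are illustrative assumptions.

```python
import numpy as np

def select_for_dispersion(fitness, idle_counts, k, idle_limit=20):
    """Hybrid relocation policy: prefer particles that both have relatively
    bad fitness and have been idle (personal best unchanged) for at least
    idle_limit iterations; fitness is assumed to be minimized."""
    fitness = np.asarray(fitness)
    bad = fitness > np.median(fitness)              # relatively low fitness
    idle = np.asarray(idle_counts) >= idle_limit    # long-idle particles
    pool = np.flatnonzero(bad & idle)
    if len(pool) < k:                               # fallback: worst fitness
        pool = np.argsort(fitness)[-k:]
    order = np.argsort(fitness[pool])[::-1]         # worst candidates first
    return pool[order[:k]]                          # indices of k particles
```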

3.2.2 Velocity Settings of Dispersed Particles

When particles are dispersed to new positions in the search space, some of their features such as previous velocities (initial velocity after relocation), the local best positions and inertia weight are unknown. For previous velocity of a dispersed particle, we have three options including final velocity of the particle just before relocation, a random velocity, and a zero velocity. However, it is not reasonable to use previous velocity of the particle, final velocity just before relocation, in computing new velocity of the particle in new position. Random velocity causes the dispersed particles to scat from the detected new region of the search apace as the new positions are intentionally selected based on the probability of finding optimal point near new target positions. In order to avoid the latter problem and search carefully in new region, we test zero initial velocity in dispersed particles after dispersion. These three approaches are evaluated by the mentioned test functions and results are illustrated in Table 3. In this experiment, the particles with low fitness are selected for dispersion. According to the results, setting the initial velocities to zero in dispersion stage is the best approach.

Initial Velocity | Random Velocity | Zero Velocity | Previous Velocity
Table 3: Results of the three approaches for setting initial velocities

3.2.3 Behavior of Dispersed Particles

As mentioned in section 3, since we want the relocated particles to search new regions of the search space more carefully, the dispersed particles should move slowly toward the global and personal best particles. Large components in the velocity equation (Equation 1) violate this aim, so we use the minimum inertia weight for dispersed particles for a period after dispersion. Moreover, since the relocated particles are placed in regions probably far from the converged particles, the velocity components toward the global and personal best particles are likely to be large. Thus, we use a random coefficient $\rho$ in the interval (0, 1) in the new velocity equation (Equation 4), so that relocated particles are only slowly attracted to the converged swarm. Equation 4 is applied to update the velocities of the dispersed particles for a limited number of iterations after the dispersion stage.

Table 4 shows that the new equation (Equation 4) outperforms both the standard velocity equation (Equation 1) and Equation 1 with a low inertia weight in finding optimal particles. In this experiment, particles with low fitness are selected for dispersion, and the dispersed particles restart the search process with zero initial velocity.

Velocity Equation | PSO Eq. 1 | PSO Eq. 1 with low inertia | Proposed Eq. 4
Table 4: Results of the three approaches for calculating velocities

3.3 Parameter Setting

In order to test the effectiveness of the dispersion mechanism, a set of experiments was conducted to set the parameters introduced in section 3.1: the period $P$, the dispersion rate $R$, and the size of the external archive. All the experiments in this subsection were conducted under the same conditions. To set the parameters with high stability, the average of 20 runs was taken for each setting, and ten standard problems with different properties were used to test the parameter values. First, an experiment was designed to set the parameter $P$. In these experiments, the values of $P$, in terms of generations, were set to 10, 30, 50, and 70. The final results are shown in Table 5; as we expected, lower values of $P$ led to better fitness in the search space. Second, the dispersion rate $R$ was tested with four different values: 15, 30, 45, and 60 percent of the swarm. Table 6 shows the results of these experiments; the best result was reached by dispersing 45% of the swarm. Table 7 shows the results of experiments with different external archive sizes. Under the conditions of these experiments, an external archive of 100 particles is the best setting for this parameter.


Period $P$ | 10 | 30 | 50 | 70
Table 5: Results of applying different values for the parameter $P$

Dispersion Rate $R$ | 15% | 30% | 45% | 60%
Table 6: Results of applying different dispersion rates ($R$)
Size of External Archive | 50 | 100 | 150 | 200
Table 7: Results of applying different sizes of external archives

4 Experimental Setting and Numerical Results

In order to compare several variants of PSO with the DSDPSO algorithm, we used a collection of 12 standard benchmark problems. Mathematical models of the problems along with the true optimal values are given in Table 1. In this problem set, we have unimodal functions such as $f_1$, $f_2$, and $f_3$. $f_4$ is a noisy quadric function whose uniformly distributed random variable lies in the interval [0, 1). The others are unrotated and rotated multimodal functions (Yao et al., 1999). The entire set of test problems taken for this study is scalable; in other words, the problems can be tested for any number of variables. In the present study, we tested the problems for dimensions 30 and 50.

In order to make a fair comparison between the proposed DSDPSO and other variants of the PSO algorithm, we implemented the standard PSO algorithm in both the global star structure and the local ring structure, named GPSO and LPSO respectively. In addition, we implemented the APSO, CLPSO, and DMS-PSO algorithms proposed in Liang and Suganthan (2005), Liang et al. (2006), and Zhan et al. (2009), and compared them with DSDPSO. The same initial population is used for all algorithms. The population size is set to 20 and 50 for 30 and 50 variables (dimensions), respectively, for all the test problems. A linearly decreasing inertia weight starting at 0.9 and ending at 0.4 is used, together with the user-defined acceleration constants $c_1$ and $c_2$. For each algorithm, the maximum number of iterations is set to 3000 for 30 dimensions and 10000 for 50 dimensions. For the DSDPSO algorithm, the new parameter $P$ is set to 30 and 50 for 30 and 50 dimensions, respectively; an external archive of size 100 and a dispersion rate of 45% are specified. In DMS-PSO, a group size of 3 and a regroup period of 5 are applied. A total of 20 runs are conducted for each experimental setting. The results are given in Table 8 and Table 9 in terms of mean best fitness, standard deviation, and P-value. Figures 7-11 show the performance curves of DSDPSO in comparison with the other variants of PSO for five of the test functions, plotting the mean fitness of the best particles found by the swarm over all runs. The numerical results show that the proposed algorithm outperforms the other variants of PSO in most of the test cases taken in this study.

Figure 7: Performance curves of GPSO, LPSO, DMS-PSO, CLPSO, APSO and DSDPSO for function
Figure 8: Performance curves of GPSO, LPSO, DMS-PSO, CLPSO, APSO and DSDPSO for function
Figure 9: Performance curves of GPSO, LPSO, DMS-PSO, CLPSO, APSO and DSDPSO for function
Figure 10: Performance curves of GPSO, LPSO, DMS-PSO, CLPSO, APSO and DSDPSO for function
Figure 11: Performance curves of GPSO, LPSO, DMS-PSO, CLPSO, APSO and DSDPSO for function

5 Conclusion

Evolutionary algorithms (EAs) are among the most effective approaches for solving optimization problems. Although they have different abilities to investigate the search space and attain optimal solutions, they all behave similarly. One way to control the behavior of these algorithms is to balance exploration against exploitation. In that case, we need a good mechanism to enhance the diversity of the population at different stages so as to reach unsearched spaces. In order to enhance diversity and survey unsearched spaces, we used a history-based search approach and implemented it in the PSO algorithm. We proposed a mechanism to guide the swarm based on diversity, using a dispersion process to detect suitable positions in the search space. This model, called the DSDPSO algorithm, uses the dispersion mechanism to steer the evolutionary process between exploring and exploiting behavior and to guide the algorithm toward unsearched spaces. The numerical results showed that the proposed algorithm outperformed the basic GPSO, LPSO, DMS-PSO, CLPSO, and APSO in most of the test cases with different properties taken in this study. This model can most likely be applied to other EAs with little modification.

Acknowledgements

The authors would like to thank Dr. Ponnuthurai Nagaratnam Suganthan for sending his implementations which helped us to improve the quality of our paper.

References

  • Bahrampour and Mohamad Nezami (2013) Bahrampour, A. and O. Mohamad Nezami (2013). Diversity guided particle swarm optimization (DGPSO) algorithm based on search space awareness particle dispersion. Proceedings of the 15th International Conference on Artificial Intelligence (ICAI 2013). Volume 2, pp. 218-224. Las Vegas, Nevada, USA.
  • Blackwell and Bentley (2002) Blackwell, T. M. and P. Bentley (2002). Don’t push me! Collision-avoiding swarms. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002). Volume 2, pp. 1691-1696. Honolulu, USA.
  • Chang et al. (2010) Chang, P. C., W. H. Huang, and C. J. Ting (2010). Dynamic diversity control in genetic algorithm for mining unsearched solution space in TSP problems. Expert Systems with Applications 37(3), 1863-1878.
  • Cheng and Shi (2011) Cheng, S. and Y. Shi (2011). Diversity control in particle swarm optimization. Proceedings of the IEEE Symposium on Swarm Intelligence (SIS 2011). pp. 1-9. Paris, France.
  • Dong et al. (2008) Dong, D., J. Jie, J. Zeng, and M. Wang (2008). Chaos-mutation-based particle swarm optimizer for dynamic environment. Proceedings of the 3rd International Conference on Intelligent System and Knowledge Engineering (ISKE 2008). Volume 1, pp. 1032-1037. Xiamen, China.
  • Eberhart et al. (1996) Eberhart, R., R. Dobbins, and P. Simpson (1996). Computational Intelligence PC Tools. Academic Press Professional.
  • Eberhart and Kennedy (1995a) Eberhart, R. and J. Kennedy (1995a). A new optimizer using particle swarm theory. Proceedings of the 6th International Symposium on Micro Machine and Human Science. pp. 39-43. Nagoya, Japan.
  • Higashi and Iba (2003) Higashi, N. and H. Iba (2003). Particle swarm optimization with Gaussian mutation. Proceedings of the IEEE Swarm Intelligence Symposium. pp. 72-79. Indianapolis, USA.
  • Jie et al. (2006) Jie, J., J. Zeng, and C. Han (2006). Adaptive particle swarm optimization with feedback control of diversity. Proceedings of the International Conference on Intelligent Computing (ICIC 2006). pp. 81–92. Kunming, China.
  • Kennedy and Eberhart (1995b) Kennedy, J. and R. Eberhart (1995b). Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks (ICNN 1995). Volume 4, pp. 1942-1948. Perth, Australia.
  • Krohling (2005) Krohling, R. A. (2005). Gaussian particle swarm with jumps. Proceedings of the IEEE Congress on Evolutionary Computation. pp. 1226-1231. Edinburgh, Scotland.
  • Li and Engelbrecht (2007) Li, X. and A. P. Engelbrecht (2007). Particle swarm optimization: An introduction and its recent developments. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2007). pp. 3391-3414. London, UK.
  • Liang and Suganthan (2005) Liang, J. J. and P. N. Suganthan (2005). Dynamic multi-swarm particle swarm optimizer with local search. Proceedings of the IEEE Congress on Evolutionary Computation. Volume 1, pp. 522–528. Edinburgh, Scotland.
  • Liang et al. (2006) Liang, J. J., A. K. Qin, P. N. Suganthan, and S. Baskar (2006). Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation 10(3), 281-295.
  • Lovbjerg and Krink (2002) Lovbjerg, M. and T. Krink (2002). Extending particle swarm optimisers with self-organized criticality. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002). Volume 2, pp. 1588-1593. Honolulu, USA.
  • Mohamad Nezami and Bahrampour (2013) Mohamad Nezami, O. and A. Bahrampour (2013). Particle swarm optimization algorithm based on diversified artificial particles (PSO-DAP). Proceedings of the 11th Iranian Conference on Intelligent Systems. Tehran, Iran.
  • Mohamad Nezami et al. (2013) Mohamad Nezami, O., A. Bahrampour, and P. Jamshidlou (2013). Dynamic diversity enhancement in particle swarm optimization (DDEPSO) algorithm for preventing from premature convergence. Proceedings of the 17th Asia Pacific Symposium on Intelligent and Evolutionary Systems (IES). Seoul, South Korea.
  • Parsopoulos and Vrahatis (2004) Parsopoulos, K. E. and M. N. Vrahatis (2004). On the computation of all global minimizers through particle swarm optimization. IEEE Transactions on Evolutionary Computation 8(3), 211-224.
  • Riget and Vesterstrorm (2002) Riget, J. and J. S. Vesterstrorm (2002). A Diversity guided particle swarm optimizer–The ARPSO. Technical Report. Department of Computer Science, University of Aarhus, Denmark.
  • Secrest and Lamont (2003) Secrest, B. R. and G. B. Lamont (2003). Visualizing particle swarm optimization–Gaussian particle swarm optimization. Proceedings of the IEEE Swarm Intelligence Symposium (SIS 2003). pp. 198-204. Indianapolis, USA.
  • Shi and Eberhart (2009) Shi, Y. and R. Eberhart (2009). Monitoring of particle swarm optimization. Frontiers of Computer Science in China 3(1), 31-37. SP Higher Education Press.
  • Shi and Eberhart (2008) Shi, Y. and R. Eberhart (2008). Population diversity of particle swarms. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2008). pp. 1063-1067. Hong Kong.
  • Soleymani et al. (2007) Soleymani, S., A. M. Ranjbar, S. Bagheri Shoukari, A. R. Shirani, and N. Sadati (2007). A new approach for bidding strategy of gencos using particle swarm optimization combined with simulated annealing method. Iranian Journal of Science & Technology, Transaction B, Engineering. Volume 31, No. B3, pp. 303-315. Shiraz University, Iran.
  • Sriyanyong (2008) Sriyanyong, P. (2008). Solving economic dispatch using particle swarm optimization combined with Gaussian mutation. Proceedings of the 5th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON 2008). Volume 2, pp. 885-888. Krabi, Thailand.
  • Stacey et al. (2003) Stacey, A., M. Jancic, and I. Grundy (2003). Particle swarm optimization with mutation. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2003). Volume 2, pp. 1425-1430. Canberra, Australia.
  • Wei et al. (2002) Wei, C., Z. He, Y. Zhang, and W. Pei (2002). Swarm directions embedded in fast evolutionary programming. Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002). Volume 2, pp. 1278-1283. Honolulu, USA.
  • Yang et al. (2009) Yang, M., H. Huang, and G. Xiao (2009). A novel dynamic particle swarm optimization algorithm based on chaotic mutation. Proceedings of the 2nd International Workshop on Knowledge Discovery and Data Mining (KDD 2009). pp. 656-659. Moscow, Russia.
  • Yao et al. (1999) Yao, X., Y. Liu, and G. Lin (1999). Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation 3(2), 82-102.
  • Yue-Lin et al. (2008) Yue-Lin, G., A. Xiao-hui, and L. Jun-min (2008). A particle swarm optimization algorithm with logarithm decreasing inertia weight and chaos mutation. Proceedings of the Conference on Computational Intelligence and Security (CIS 2008). Volume 1, pp. 61-65. Suzhou, China.
  • Zhan et al. (2009) Zhan, Z. H., J. Zhang, Y. Li, and H. S. Chung (2009). Adaptive particle swarm optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 39(6), 1362-1381.
  • Zhan et al. (2010) Zhan, Z., J. Zhang, and Y. Shi (2010). Experimental study on PSO diversity. Proceedings of the 3rd International Workshop on Advanced Computational Intelligence (IWACI 2010). pp. 310-317. Suzhou, China.

Appendix 1 – Comparison Tables

Test Functions | GPSO | LPSO | DMS-PSO | CLPSO | APSO | DSDPSO
(mean best fitness, standard deviation, and P-value reported for each of $f_1$-$f_{12}$)
Table 8: Comparison results of GPSO, LPSO, DMS-PSO, CLPSO, APSO and DSDPSO for 20 particles of 30 dimensions in 3000 iterations
Test Functions | GPSO | LPSO | DMS-PSO | CLPSO | APSO | DSDPSO
(mean best fitness, standard deviation, and P-value reported for each of $f_1$-$f_{12}$)
Table 9: Comparison results of GPSO, LPSO, DMS-PSO, CLPSO, APSO and DSDPSO for 50 particles of 50 dimensions in 10000 iterations