Diversity Enhancement for Micro-Differential Evolution

12/25/2015
by Hojjat Salehinejad, et al.

The differential evolution (DE) algorithm suffers from high computational time due to the slow nature of its evaluation process. In contrast, micro-DE (MDE) algorithms employ a very small population size, which can converge faster to a reasonable solution. However, these algorithms are vulnerable to premature convergence as well as a high risk of stagnation. In this paper, an MDE algorithm with vectorized random mutation factor (MDEVM) is proposed, which utilizes the benefit of a small population size while empowering the exploration ability of the mutation factor by randomizing it at the decision-variable level. The idea is supported by analyzing the mutation factor using Monte-Carlo-based simulations. To facilitate the use of MDE algorithms with very small population sizes, new mutation schemes for population sizes less than four are also proposed. Furthermore, comprehensive comparative simulations and analyses of the performance of MDE algorithms over various mutation schemes, population sizes, problem types (i.e., uni-modal, multi-modal, and composite), problem dimensionalities, and mutation factor ranges are conducted, with population diversity analysis for stagnation and trapping in local optima. The studies are conducted on the 28 benchmark functions provided for the IEEE CEC-2013 competition. Experimental results demonstrate the high performance and convergence speed of the proposed MDEVM algorithm.


1 Introduction

Evolutionary algorithms are practical tools for solving real-world problems f1 . Accuracy enhancement and increasing the convergence speed toward finding the global solution(s) of optimization problems have motivated many researchers to develop more efficient evolutionary approaches. The differential evolution (DE) algorithm is one of the state-of-the-art global optimization algorithms, popular due to its simplicity and effectiveness. This algorithm works on a set of individuals, called the population, for which an optimal size setting is imperative for good performance 1 . The centroid-based approach is one of the successful approaches for the DE algorithm, which works by computing the centroid of the population cen1 , cen2 . Opposition-based computing is another approach with great potential for improving the performance of the DE algorithm t2 .

Variants of the DE algorithm with a large population size often deliver more reasonable results than their small-population-size counterparts. A large population size supports higher population diversity, and recombination of its diverse members gives the optimizer a higher exploration ability for finding global solution(s) 2 -4 . The diversity enhancement technique proposed in this paper offers better exploration of the problem landscape. Most research during the past decades focused on developing complex approaches with a large population size 38 . Utilizing a large population size intrinsically entails more function evaluations and, as a consequence, a lower convergence rate 2 . Therefore, algorithms with a large population size may not be satisfactory for real-time or online applications 37 , 39 .

Using a population size much smaller than the number of decision variables is sometimes more efficient than some of the state-of-the-art DE algorithms with a large population. The term micro-algorithm (commonly denoted with the μ- prefix) refers to population-based algorithms with a small population size 4 . Micro-algorithms have been used in diverse applications, particularly due to their lighter hardware requirements and their ability to operate in embedded systems with limited memory 1 . Employing a small population size decreases the number of function calls, but, due to the lack of diversity, it also increases the risk of premature convergence as well as stagnation.

The premature convergence problem refers to the situation where the population has converged to a sub-optimal solution of a multimodal objective function 2 . This situation mostly occurs when the population has lost its diversity and cannot jump out of local optima. In this case, the algorithm progresses more slowly than usual and may stop any further improvement of the evolved candidate solutions 2 , 39 .

In the stagnation scenario, the population diverges from the path toward optimality and does not converge to any local optimum, or any other point, as the algorithm proceeds. In this case, the population keeps a certain level of diversity through the generations. Even adding new individuals to the population or updating the current individuals may not guide the algorithm toward convergence 2 . A sign of this behaviour is a static progression of the best found candidate solutions, even while the individuals keep moving on the problem landscape over generations 39 .

Given the characteristics of stagnation and premature convergence, reducing the population size while raising the diversity of the population is key to achieving a faster convergence speed while maintaining a low risk of premature convergence or stagnation 2 , 39 . The DE algorithm has several manually tunable control parameters, and different adaptive proposals have been devised to avoid manual adjustments 61 . One way to increase the diversity of the population while keeping its convergence toward global solution(s) is to use intelligent adaptive techniques.

The mutation factor is one of the critical parameters and is usually set by the user Segura_2015 . A simple modification to overcome the stagnation and premature convergence problems is to use randomized values for the mutation factor Das_2005 , Das_2005a , 39 . The authors have recently proposed the idea of a vectorized random mutation factor in the DE algorithm, randomized for each decision variable of the problem, called MDEVM 39 . This algorithm has recently been cited as one of the competitive algorithms in the literature Brown_2015 . In this paper, we provide a comprehensive survey on micro-EAs. The proposed MDEVM method is discussed in depth, and supporting Monte-Carlo simulations analyzing mutation factor diversity are presented. For the first time, we propose a mutation scheme that can work for very small population sizes (i.e., fewer than four individuals), and comparative analyses over various problem dimensions and mutation schemes for MDE algorithms are presented. The considered benchmark problems are a set of 28 functions from CEC-2013 that covers uni-modal, multi-modal, and composite problems. The studies further cover various ranges for the mutation factor, population diversities, and various stopping conditions for the MDE algorithm.

In the next section, the micro-population based methods are briefly surveyed. Then, a review of the DE algorithm is presented in Section 3. In Section 4, the proposed method is presented, and diversity enhancement in MDE using different structures of mutation factor is studied in detail. The simulation results and corresponding analysis are provided in Section 5. Finally, the paper is concluded in Section 6.

2 Related Works

Many research works have attempted to introduce efficient micro-algorithms. These works can be categorized into four main groups: micro-genetic algorithms (micro-GAs), micro-particle swarm optimization (micro-PSO), MDE, and other population-based approaches.

2.1 Micro-Genetic Algorithms

One of the earlier research works in this direction was a genetic algorithm (GA) with five chromosomes 5 . The strategy in this micro-GA is to copy the best found chromosome from the current population to the next generation. This work was tested on low-dimensional problems and resulted in a faster convergence speed compared to the classical GA. The idea of population reinitialization for micro-GA was another early work in the field 31 . In this approach, the best individual of each converged population, after a predefined number of generations, is replaced with a randomly selected individual in the population of the next iteration. The parallel version of micro-GA, called the parallel micro-genetic algorithm (PMGA), was reported in 32 ; it solves ramp-rate-constrained economic dispatch (ED) problems for generating units with non-monotonically and monotonically increasing incremental cost functions. The PMGA was implemented on a thirty-two-processor Beowulf cluster, and the reported results demonstrate the feasibility of this approach in online applications. Micro-algorithms have also been employed in multi-objective optimization (MOO). An improved version of the nondominated sorting genetic algorithm (NSGA-II) with a specific population initialization strategy is embedded into the standard micro-GA to solve MOO problems 10 . A micro-GA with a population size of four and a reinitialization strategy is proposed in 28 , which can produce a major part of the Pareto front at a very low computational cost. Three forms of elitism and a memory are used to generate the initial population 28 . An improved version of micro-GA, the archive-based micro-GA (AMGA2) for constrained MOO, is proposed in 33 . This algorithm is based on a steady-state GA that preserves an external archive of the best and most diverse candidate solutions. This small-population-based approach facilitates the decoupling of the working population, the external archive, and the number of required solutions as the outcome of the optimization procedure. A model of MOO for hierarchical GA (MOHGA) based on the micro-GA approach for optimizing modular neural networks (MNNs) is proposed in 34 . This approach is used in iris recognition. The MOHGA divides the input data into granules and sub-modules and then decides how to split the data for the training and testing phases. It is reported that this technique can obtain good results using less data 34 . The micro-GA has also been used for local fine-tuning, in an adaptive local-search-intensity manner, for training recurrent artificial neural networks (ANNs) 35 . It is reported that this approach is useful for system identification tasks. In 21 , a multi-objective micro-genetic extreme learning machine (MOMG-ELM) is proposed, which determines the appropriate number of hidden nodes in the machine while minimizing the mean square error (MSE) of the training phase. The micro-GA has been applied successfully in many applications, such as designing waveguide slot antennas with dielectric lenses 36 , detection of flaws in composites 30 , and scheduling of a real-world pipeline network 29 , where better performance compared to the standard GA is reported.

2.2 Micro-Particle Swarm Optimization

The particle swarm optimization (PSO) algorithm is one of the well-known swarm intelligence algorithms, and small-population-size versions of it have been developed 6 , 51 , 7 . Coulomb’s law is used in a micro-PSO method for high-dimensional problems 6 . The first achievement of this approach is removing the burden of determining the suitable size of the space needed to enclose the blacklisted solutions and the amount of repulsion needed to repel the particles, as these parameters are extremely difficult to determine for high-dimensional problems. The other achievement is the flexibility of controlling the repulsion on particles through a parameter that controls the amount of repulsion experienced by the particles at a particular position. The conducted simulation results on five high-dimensional benchmark functions demonstrate superior performance of micro-PSO versus the standard PSO with a large population size. A five-particle micro-PSO is used in 41 to deal with constrained optimization problems. This method preserves population diversity by using a reinitialization process and incorporates a mutation operator to improve the exploratory capabilities of the algorithm. The reported results present competitive performance versus the simple multi-membered evolution strategy (SMES) and the stochastic ranking (SR) method 41 . The micro-PSO was employed for MOO in 42 ; it produces reasonably good approximations of the Pareto front of moderate-dimensional problems with a small number of objective function evaluations (only 3000 calls per run), compared to the PSO approach. In another micro-PSO algorithm, a parallel master-slave model of cooperative micro-PSO was introduced 7 , in which the original search space is decomposed into subspaces with smaller dimensions. Then, five individuals are considered in each subspace to identify suboptimal partial solution components. Its performance was assessed on a set of five widely used test problems, with significant improvements in solution quality compared to the standard PSO algorithm 7 . A cooperative PSO approach was proposed in 47 , which uses a collection of low-dimensional and low-cardinality sub-swarms to deal with complex high-dimensional problems. Promising results are reported using this method, tested on five widely used test problems. A clonal selection algorithm (CSA), which belongs to the family of artificial immune systems (AIS), in conjunction with a micro-PSO (CSPSO) was introduced in 46 as a hybrid scheme. In this hybridization, the strength of the standard PSO algorithm is enhanced: the micro-PSO helps to find the optimum solution with less memory, and the CSA increases the exploration capability while reducing the chance of convergence to a local minimum. Simulations are conducted on only four benchmark functions, where competitive performance is reported. A mixed-integer-binary small-population PSO was proposed in 58 for solving an optimal power flow problem. The constraint-handling technique used in this algorithm is based on a strategy to generate and keep its four decision variables in the feasible space through heuristic operators. In this way, the algorithm focuses its search procedure on the feasible solution space to obtain a better objective value. This technique improves the final solution quality as well as the convergence speed 58 . The micro-PSO has been developed for many applications, such as motion estimation 40 , power system stabilizer design 43 , 45 , optimal design of static var compensator (SVC) damping controllers 44 , reactive power optimization 48 , short-term hydrothermal scheduling 50 , reconfiguration of shipboard power systems 52 , and transient-stability-constrained optimal power flow 49 .

2.3 Micro-Differential Evolution Algorithms

The DE algorithm works based on the scaled difference between two individuals of a population, where the scaling factor is called the mutation factor. Due to the reliability and simplicity of the DE algorithm, it has been employed in many science and engineering areas, such as solving the large capacitor placement problem 17 and synthesis of spaced antenna arrays 18 . Many works have put forward new schemes to enhance the DE algorithm, such as opposition-based differential evolution (ODE) 14 , enhanced differential evolution using center-based sampling 15 , and opposition-based adaptive differential evolution 16 . Some approaches toward reducing the computational cost of DE-based algorithms by reducing the population size have been proposed as well 8 -9 , 11 -13 . In order to increase the exploration ability of the MDE algorithm and to prevent stagnation, an extra search move is incorporated into the MDE algorithm in 1 by perturbing individuals along the axes. A local search procedure is hybridized with the MDE algorithm in 3 to tackle high-dimensional problems; the reported performance results are comparable with some other methods. As an application of MDE, a hybrid differential evolution (HDE) with a population size of five is used for finding a global solution 60 . A gradually reducing population size method is proposed in 8 . This method is examined on 13 benchmark functions, where the results demonstrate higher robustness as well as efficiency compared to the parent DE 8 . In another approach 9 , small cooperative sub-populations are employed to find sub-components of the original problem concurrently. Through cooperation of the sub-populations, the sub-components are combined to construct the complete solution of the problem. Performance evaluation of this method was done on high-dimensional instances of five sample test problems, with encouraging results reported in 9 . MDE is employed for evolving an indirect representation of the bin packing problem with acceptable performance 11 . The idea of a self-adaptive population size was carried out to test absolute and relative encoding methods for DE 12 . The reported simulation results on 20 benchmark problems indicate that, in terms of average performance and stability, the self-adaptive population size using relative encoding outperforms the absolute encoding method and the standard DE algorithm 12 . The idea of micro-ODE was proposed and evaluated on an image thresholding case study 13 . The performance of the proposed method was compared with the Kittler algorithm and the MDE. The micro-ODE method outperformed these algorithms on 16 challenging test images and demonstrated a faster convergence speed due to its opposition-based population initialization scheme 13 . The smallest population size used in Fajfar_2012 is . This method tries to decrease the population size by using three different rules to select candidates for replacing the trial vector Fajfar_2012 .

It is worth mentioning that MDE is different from the compact DE (cDE) methods Mininno_2011 . In cDE methods, a statistical representation of the population is used, where the memory requirement is similar to using four individuals in the population, regardless of the problem’s dimension Mininno_2011 , Brown_2015 . Since this work focuses on small non-virtual populations, this class of DE algorithms is left for investigation in other works.

Many methods have been proposed in the literature to increase the robustness and reliability of the DE algorithm through adaptive or self-adaptive approaches Neri_2010 , Neri_2010a , Mininno_2011 . This is particularly important for hyper-parameter adjustment. The mutation factor is one of those parameters and is generally set to a constant value Segura_2015 . However, it has been shown that randomization of the mutation factor can offer potential new search moves and compensate for the excessively deterministic search structure of a standard DE algorithm Das_2005 , 39 . Studies have used various distributions, such as Gaussian, log-normal, and Cauchy, to generate random mutation factors. However, none of them is superior to the others Segura_2015 .

The methods proposed in Das_2005 , Das_2005a use a random mutation factor at each generation to increase the diversity of the population, which is reportedly effective for both noisy and stationary problems Price_2005 . These methods use a standard population size, and the mutation factor is randomly selected from a range such that its mean value remains at 0.75. In Weber_2011 , four different mutation factor (scale factor) schemes are proposed. The population size is set to . The study shows that none of the methods gives promising results for all problems, since the performance depends on the employed type of distribution Weber_2011 . In Brest_2008 , a self-adaptive DE algorithm for the mutation factor and crossover rate parameters is presented. The smallest population size used in the experiments is . A self-adaptive control mechanism is used in Brest_2009 to change the mutation factor and crossover rate over the generations. In this method, only the “rand/1/bin” mutation vector is used for a multi-population method with an aging mechanism. The jDE method is one of the promising methods, where the mutation factor is generated with a specific ratio for each individual of a standard-size population Brest_2006 . The idea of generating a random mutation factor at the lowest level (for each individual of the population and each dimension of the problem, per generation) was proposed by the authors in 39 . This technique is used to increase the search performance of standard MDE algorithms. The method was evaluated on the set of 28 benchmark functions of the CEC-2013 competition, where the results show superior exploration performance. This algorithm, called MDEVM, is used as a baseline to compare the performance of a new mutation scheme, called current-by-rand-to-pbest, proposed in the JADE-based algorithm of Brown_2015 . In that approach, which is a DE algorithm for unconstrained optimization problems, the smallest considered population size is Brown_2015 . In this method, the mutation factor and crossover rate are randomly generated at the beginning of each generation, and the mean of the distribution is updated in each generation. The proposed current-by-rand-to-pbest mutation scheme in Brown_2015 is tested for both large and small population sizes on a set of 13 classical benchmark functions. The comparative results in 39 show competitive performance between the MDEVM and JADE algorithms. The MDEVM algorithm was also developed for 3-D localization of wireless sensors, a real-world application, in PIMRC .

2.4 Other Micro-Population-based Algorithms

Several other types of micro-population-based algorithms have been proposed in the literature. A cooperative micro-artificial bee colony (CMABC) approach for large-scale optimization was presented in 53 . This approach combines the divide-and-conquer property of cooperative algorithms with the low computational cost of the micro-artificial bee colony (MABC) method. In the case of employing micro-bacterial foraging optimization algorithms (μ-BFOA) for solving optimization problems, in 54 the best bacterium is kept unaltered, whereas the other population members are reinitialized. It is reported that this approach outperformed the standard bacterial foraging optimization algorithm (BFOA) with a larger population size 54 . For the environmental economic dispatch case study, a chaotic micro-bacterial foraging algorithm (CMBFA) with a time-varying chemotactic step size is proposed in 55 . It is reported that the convergence characteristics, speed, and solution quality of this method are better than those of the classical BFOA for a 3-unit system and the standard IEEE 30-bus test system. A micro-artificial immune system (Micro-AIS) with five individuals (antibodies), from which only 15 clones are obtained, is proposed in 56 . In this approach, diversity is preserved by two simple but fast mutation operators that work together with a reinitialization process under nominal convergence 56 . Another type of EA, called elitistic evolution (EEv), is proposed for optimizing high-dimensional problems in 57 ; it works without complex mechanisms such as Hessian or covariance matrices. This approach exhibits adaptive and elitist behaviour, in which a single adaptive parameter controls the evolutionary operators to provide reasonable local and global search abilities 57 . An efficient scheduler for heterogeneous computing (HC) and grid environments, based on parallel micro cross-generational elitist selection, heterogeneous recombination, and cataclysmic mutation, called p-CHC, is proposed in 59 . This method combines a parallel sub-population model with a focused evolutionary search using a micro-population and a randomized local search (LS) method. Performance comparisons with algorithms such as ant colony optimization (ACO) and GA demonstrate good scheduling within reduced execution times 59 .

3 Differential Evolution

Generally speaking, while solving a black-box problem to find optimal decision variables, an optimizer has no knowledge of the structure of the problem landscape on which it minimizes/maximizes an objective function. The DE algorithm, similar to other algorithms in its category, starts its search procedure from uniformly random initial vectors and tries to improve them in each generation toward an optimal solution. The population consists of Np vectors X_i^g in generation g, where X_i^g is a D-dimensional vector defined as X_i^g = (x_{i,1}^g, ..., x_{i,D}^g). Generally, a simple DE algorithm consists of the following three major operations: mutation, crossover, and selection.

Mutation: This step randomly selects three vectors X_{r1}^g, X_{r2}^g, X_{r3}^g from the population such that r1 ≠ r2 ≠ r3 ≠ i; for each vector X_i^g, the mutant vector of the scheme “DE/Rand/1” is calculated as

V_i^g = X_{r1}^g + F · (X_{r2}^g − X_{r3}^g),    (1)

where the factor F is a real constant that controls the amplification of the added differential vector (X_{r2}^g − X_{r3}^g). The exploration ability of DE increases with higher values of F. So far, four further main mutation schemes have been introduced 61 ,62 , summarized as

  • DE/Best/1:

    V_i^g = X_best^g + F · (X_{r1}^g − X_{r2}^g)    (2)
  • DE/Target-to-Best/1 (DE/T2B/1):

    V_i^g = X_i^g + F · (X_best^g − X_i^g) + F · (X_{r1}^g − X_{r2}^g)    (3)
  • DE/Rand/2:

    V_i^g = X_{r1}^g + F · (X_{r2}^g − X_{r3}^g) + F · (X_{r4}^g − X_{r5}^g)    (4)
  • DE/Best/2:

    V_i^g = X_best^g + F · (X_{r1}^g − X_{r2}^g) + F · (X_{r3}^g − X_{r4}^g)    (5)

where X_best^g is the vector corresponding to the best objective value in the population.
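As an illustrative sketch (not the authors' code; function and variable names are our own), the five classical mutation schemes above can be written with NumPy:

```python
import numpy as np

# number of distinct random indices each scheme requires
_NEEDED = {"rand/1": 3, "best/1": 2, "target-to-best/1": 2,
           "rand/2": 5, "best/2": 4}

def mutate(pop, i, best, F, scheme="rand/1", rng=None):
    """Build a DE mutant vector for individual i under a classical scheme.

    pop  : (Np, D) array of current individuals
    best : index of the best individual in pop
    F    : mutation factor (a scalar here; a length-D vector also works)
    """
    if scheme not in _NEEDED:
        raise ValueError(f"unknown scheme: {scheme}")
    rng = rng or np.random.default_rng()
    # distinct random indices, all different from the target index i
    r = rng.choice([k for k in range(len(pop)) if k != i],
                   size=_NEEDED[scheme], replace=False)
    x = pop
    if scheme == "rand/1":
        return x[r[0]] + F * (x[r[1]] - x[r[2]])
    if scheme == "best/1":
        return x[best] + F * (x[r[0]] - x[r[1]])
    if scheme == "target-to-best/1":
        return x[i] + F * (x[best] - x[i]) + F * (x[r[0]] - x[r[1]])
    if scheme == "rand/2":
        return x[r[0]] + F * (x[r[1]] - x[r[2]]) + F * (x[r[3]] - x[r[4]])
    return x[best] + F * (x[r[0]] - x[r[1]]) + F * (x[r[2]] - x[r[3]])
```

Note that "DE/Rand/2" requires at least six individuals so that five distinct non-target indices exist, which is precisely the constraint the paper relaxes with its mutation schemes for population sizes below four.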

Crossover: The crossover operation increases the diversity of the population by mixing the mutant and parent vectors as follows:

u_{i,j}^g = v_{i,j}^g if rand_j(0,1) ≤ Cr or j = j_rand, otherwise u_{i,j}^g = x_{i,j}^g,    (6)

where j = 1, ..., D indexes the dimension, Cr is the crossover rate parameter, rand_j(0,1) generates a real random uniform number in the interval (0,1), and j_rand is a randomly chosen index guaranteeing that at least one component is taken from the mutant vector. Therefore, the trial vector can be generated as

U_i^g = (u_{i,1}^g, ..., u_{i,D}^g).    (7)

Selection: The trial vector U_i^g and the parent X_i^g are evaluated and compared with respect to their fitness values; the one with the better fitness value is selected for the next generation.
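A minimal sketch of the crossover (Eqs. 6-7) and selection steps, assuming minimization (names are illustrative, not taken from the paper):

```python
import numpy as np

def crossover(x, v, Cr, rng):
    """Binomial crossover: take each component from the mutant v with
    probability Cr; index j_rand guarantees at least one mutant component."""
    D = len(x)
    j_rand = rng.integers(D)
    mask = rng.random(D) <= Cr
    mask[j_rand] = True
    return np.where(mask, v, x)

def select(x, u, f):
    """Greedy selection for minimization: keep the trial u only if it is
    at least as good as the parent x."""
    return u if f(u) <= f(x) else x
```

With `Cr = 1` the trial vector equals the mutant; with `Cr` near 0 it differs from the parent in only one component, which is how the crossover rate trades off exploration against inheritance.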

4 Proposed Diversity Enhancement via Vectorized Random Mutation

In our proposed algorithm, the population size is very small compared to the standard DE algorithm. Reducing the population size results in a faster convergence rate but a higher risk of stagnation. However, by increasing the population diversity, it is possible to decrease the stagnation risk 2 , 3 . In order to foster diversity, the mutation factor F, one of the most significant control parameters of the DE algorithm, can play a major role. The mutation factor in the DE algorithm is usually a constant mutation factor (CMF) set to a fixed value 2 , 14 . This factor can also be selected randomly from an interval for each individual in the population 3 . Different versions of this scalar random mutation factor (SRMF) have been proposed in the literature for the standard DE algorithm, as discussed in the previous section. We call its micro version the micro-differential evolution with scalar random mutation factor (MDESM), where the population size is very small as well. For the MDE algorithm, in order to increase the population diversity, we propose the idea of utilizing a vectorized random mutation factor (VRMF) for each individual in the population. This approach is called the MDEVM algorithm. Therefore, the mutation factor can be defined for each individual as

F_i = (F_{i,1}, F_{i,2}, ..., F_{i,D}),    (8)

where each component F_{i,j} is drawn uniformly at random from the mutation factor interval 3 . This interval is selected based on the experimental results presented in the next section.
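Eq. (8) amounts to drawing one independent factor per individual and per decision variable. A hedged sketch follows; the `[0.1, 1.5]` default bounds are placeholders of our own, since the paper selects the interval experimentally:

```python
import numpy as np

def vrmf(Np, D, F_min=0.1, F_max=1.5, rng=None):
    """Vectorized random mutation factors (Eq. 8): one independent
    uniform draw per individual and per decision variable."""
    rng = rng or np.random.default_rng()
    return rng.uniform(F_min, F_max, size=(Np, D))
```

Row `i` of the returned matrix is multiplied elementwise with the differential vector of individual `i`, in contrast to CMF (one factor for everything) and SRMF (one factor per individual).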

1:Procedure MDEVM
2: // Initial Population Generation
3:for i = 1 to Np do
4:     for j = 1 to D do
5:          x_{i,j} ← x_j^min + rand(0,1) · (x_j^max − x_j^min)
6:     end for
7:     evaluate f(X_i)
8:end for // End of Initial Population Generation
9:while (|f_best − f_VTR| ≥ ε) & (NFC < NFC_max) do
10:     for i = 1 to Np do // Mutation
11:          Select three random population vectors X_{r1}, X_{r2}, X_{r3} where r1 ≠ r2 ≠ r3 ≠ i
12:          for j = 1 to D do
13:               F_{i,j} ← rand(F_min, F_max)
14:               v_{i,j} ← x_{r1,j} + F_{i,j} · (x_{r2,j} − x_{r3,j})
15:          end for // End of Mutation // Crossover
16:          for j = 1 to D do
17:               if rand(0,1) ≤ Cr or j = j_rand then
18:                    u_{i,j} ← v_{i,j}
19:               else
20:                    u_{i,j} ← x_{i,j}
21:               end if
22:          end for // End of Crossover // Selection
23:          if f(U_i) ≤ f(X_i) then
24:               X'_i ← U_i
25:          else
26:               X'_i ← X_i
27:          end if // End of Selection
28:     end for
29:     X ← X', g ← g + 1
30:     NFC ← NFC + Np
31:     update f_best
32:end while
Algorithm 1 Micro-Differential Evolution with Vectorized Mutation (MDEVM)
(a) Mutation vector distribution of vector
(b) Monte-Carlo simulation
Figure 1: Diversity of the mutation vector for a 2-D individual vector on a 2-D map for constant, scalar random, and vectorized random mutation factors.
(a) Static
(b) Scalar Random
(c) Vectorized Random
Figure 2: Monte-Carlo simulation of population diversity after 10,000 random generations, by considering the crossover operator.
(a) Centroid Distance
(b) Pairwise Distance
Figure 3: Average centroid and pairwise distances for the Monte-Carlo simulation of population diversity for dimensions 1 to 1000, after 10,000 random generations, by considering the mutation and crossover operators.

The enhancement made in this paper, compared to the original idea proposed in 39 , is a parallel implementation of the MDEVM algorithm suitable for running on multicore central processing units (CPUs). In this implementation, the population is stored in shared memory and a pool of workers (CPU cores) conducts the processing. For each main step of the algorithm (mutation, crossover, selection), the individuals are distributed over the CPU cores of the machine, and data is read from and written to the shared memory. In this way, the running time of the algorithm is enhanced dramatically, which is particularly suitable for applications on smart devices. The pseudocode of the proposed MDEVM approach is given in Algorithm 1. After generation of the initial population, the mutation vector is computed using the proposed mutation factor, Eq. (8). Then, the crossover and selection procedures are conducted as in the DE algorithm to generate the next population. The termination criterion is met when the difference between the best fitness value and the fitness value-to-reach is less than the fitness error-value-to-reach, or the search procedure exceeds the maximum number of function calls.
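The worker-pool idea above might be sketched as follows. This is a simplified stand-in, not the authors' implementation: it uses a thread pool and a toy sphere objective (both our assumptions) instead of CPU processes reading a shared-memory population.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def sphere(x):
    """Toy objective for illustration: f(x) = sum of squared components."""
    return float(np.sum(x * x))

def evaluate_population(pop, fitness=sphere, workers=4):
    """Distribute the fitness evaluations of all individuals over a pool
    of workers; results come back in population order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, pop))
```

For genuinely expensive objective functions, a process pool (or the paper's shared-memory scheme) avoids Python's GIL; the thread pool here keeps the sketch portable and self-contained.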

4.1 Supporting Randomized Vectorized Mutation Factor By Monte-Carlo Simulations

In order to visualize the exploration abilities of CMF, scalar random mutation factor (SRMF), and VRMF, the possible diversities of a 2-D individual sample vector R are presented in Figure 1.a. To give a better sense of the variable space, it is constructed with hexagons, where each hexagon represents a point in the variable space, and the landscape for the two variables is bounded. Given the sample vector R, denoted by a dashed hexagon, the effect of an arbitrary CMF on R is denoted by a dotted dark hexagon. The diversity of the generated mutation vector is therefore limited to one hexagon (the dotted dark hexagon) in the direction of vector R. In the case of an identical uniform random factor for all variables of an individual, i.e., the SRMF scheme, the diversity of the mutation vector is not limited to one hexagon but extends along the vector R, denoted by grey hexagons. Conversely, by randomizing the factor for each variable of each individual using a uniform random vector F, i.e., the VRMF scheme, the diversity covers the whole plane containing all the hexagons, which represents the highest exploration power.

The diversities of CMF, SRMF, and VRMF are investigated by employing a Monte-Carlo simulation on an arbitrary landscape in Figure 1.b. In this simulation, for an arbitrary vector R, 100 sample mutation vectors are generated for each of the CMF, SRMF, and VRMF schemes within the given variable boundaries. The simulation illustrates that the VRMF scheme supports a higher diversity than the SRMF scheme, whose diversity is limited to points on a line. Strictly speaking, if all variables of the individual vector R are multiplied by a random scalar, the generated points lie along the line indicated by vector R; in effect, the SRMF generates points in the same direction as vector R. If the relationship among the variables (the variables' interaction) is linear, such a mutation vector performs well, but that is a very exceptional case, especially in real-world problems. When the VRMF scheme is utilized, however, the mutation vector is free to explore any point of the search space, without the linearity restriction of SRMF. This discussion remains valid for higher dimensions, where the line is replaced with a plane or hyperplane.
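The collinearity argument can be checked numerically with a small Monte-Carlo sketch; the sample vector R, the factor range, and the helper names are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([2.0, 1.0])  # an arbitrary 2-D sample vector, as in the paper's illustration


def mutants(scheme, n=100, f_low=0.1, f_high=1.5):
    """Generate n scaled vectors F*R under each mutation-factor scheme."""
    if scheme == "CMF":    # one constant factor for everything
        return np.tile(0.5 * R, (n, 1))
    if scheme == "SRMF":   # one random scalar per mutant vector
        F = rng.uniform(f_low, f_high, size=(n, 1))
        return F * R
    if scheme == "VRMF":   # an independent random factor per variable
        F = rng.uniform(f_low, f_high, size=(n, 2))
        return F * R


def collinear_with_R(points, tol=1e-9):
    """2-D cross-product test: a point F*R stays on the line through the
    origin along R exactly when the cross product with R vanishes."""
    cross = points[:, 0] * R[1] - points[:, 1] * R[0]
    return bool(np.all(np.abs(cross) < tol))
```

Running this, the CMF and SRMF samples all pass the collinearity test, while the VRMF samples spread off the line, matching the figure's qualitative picture.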

By taking into account the crossover component of the MDE algorithm, another Monte-Carlo simulation is conducted for the CMF, SRMF, and VRMF schemes, as presented in Figure 2. These simulations use the “DE/Rand/1” mutation scheme, where the sample individuals are generated from an identical uniform random population in a 2-D variable space. The crossover plays a decisive role in injecting diversity into the population, as presented for the CMF scheme in Figure 2.a. However, as presented in Figures 2.b and 2.c, the crossover also expands the diversity of the SRMF and VRMF schemes dramatically, such that almost the whole variable space is explored by the VRMF scheme.
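The crossover step referred to here is, in standard DE, the binomial crossover; a sketch under that convention follows (the function name is illustrative, and Cr = 0.9 matches Table 1):

```python
import numpy as np


def binomial_crossover(target, mutant, cr=0.9, rng=None):
    """Standard DE binomial crossover: each trial variable is taken from the
    mutant with probability cr; one randomly chosen index is forced to come
    from the mutant so the trial vector differs from the target in at least
    one variable."""
    if rng is None:
        rng = np.random.default_rng()
    d = target.size
    mask = rng.random(d) < cr
    mask[rng.integers(d)] = True  # the forced j_rand component
    return np.where(mask, mutant, target)
```

Because each component is drawn independently from either parent, crossover by itself scatters trial vectors over the axis-aligned mixtures of target and mutant, which is the diversity effect visible in Figure 2.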

Keeping the stated Monte-Carlo simulation settings, the diversity analysis of the CMF, SRMF, and VRMF schemes is extended over several variable-space dimensions and population sizes, as shown in Figure 3. In these simulations, the average distance from the centroid and the average pairwise distance are considered. The average distance from the centroid measures the distance of each individual from the centroid of the population, and thus how diverse the population is. The average pairwise distance is the average of the distances between individual pairs in a population; it measures the diversity of the population as well as how far the individuals are spread from each other on the landscape.

The average distance of the individuals from the population centroid is computed as

D_c = \frac{1}{NP} \sum_{i=1}^{NP} \left\| \mathbf{x}_i - \mathbf{c} \right\|, \quad (9)

where \mathbf{c} is the centroid of the population, computed as

\mathbf{c} = \frac{1}{NP} \sum_{i=1}^{NP} \mathbf{x}_i. \quad (10)

As Figure 3.a shows, the CMF has the least diversity for both population sizes compared to the SRMF and VRMF schemes, while the VRMF scheme has the highest diversity; as the dimensionality of the problem increases, its diversity improves further relative to the CMF and SRMF schemes. The larger population obviously has a higher diversity than the smaller one in all schemes, but this improvement is much smaller than the diversity the VRMF scheme delivers into the population with a much smaller population size. The comparison of CMF at both population sizes against VRMF at the small population size clearly indicates that the VRMF scheme with a small population size performs better in terms of diversity enhancement.

In order to study the diversity based on the average pairwise distance, it is computed as

D_p = \frac{2}{NP(NP-1)} \sum_{i=1}^{NP-1} \sum_{j=i+1}^{NP} \left\| \mathbf{x}_i - \mathbf{x}_j \right\|. \quad (11)

The average pairwise distances for different dimensions and population sizes are illustrated in Figure 3.b. The simulation results for this diversity measure also clearly demonstrate the strength of the VRMF scheme with small population sizes.
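The two diversity measures of Eqs. (9)–(11) can be sketched directly, assuming the standard Euclidean-norm forms (function names are illustrative):

```python
import numpy as np


def centroid_diversity(pop):
    """Eqs. (9)-(10): mean Euclidean distance of the individuals from the
    population centroid."""
    c = pop.mean(axis=0)                            # centroid, Eq. (10)
    return float(np.linalg.norm(pop - c, axis=1).mean())  # Eq. (9)


def pairwise_diversity(pop):
    """Eq. (11): mean Euclidean distance over all distinct individual pairs."""
    n = len(pop)
    dists = [np.linalg.norm(pop[i] - pop[j])
             for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.mean(dists))
```

For example, for four individuals on the corners of a 2×2 square, the centroid diversity is √2 and the pairwise diversity averages the four sides and two diagonals.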

Description                                  Value
Crossover probability constant               0.9
Maximum number of function calls
Objective function error value to reach      1e-8
Number of runs                               30
Mutation factor                              0.9

Table 1: Parameter settings for all conducted experiments.

5 Simulation Results

In this section, the performance of the proposed MDEVM algorithm is compared with the MDE, MDESM, and JADE 32 algorithms. The parameter settings and the employed benchmark functions (i.e., the CEC-2013 testbed 19 ) are described in the next subsection. Then, the comprehensive experimental series are presented in detail. The algorithm is implemented in parallel using the multiprocessing library of the Python programming language. The experiments are conducted on a cluster of 16 CPUs with 1 TB of RAM.

In the next subsection, the benchmark functions and parameter settings are provided. Afterwards, a set of experiments and analyses regarding different mutation schemes and population sizes, problem dimensionalities, mutation factor ranges, population diversity, and higher numbers of function calls is presented.


NP   Mutant vector (MV) schemes
2    DE/Rand/1, DE/Best/1, DE/T2B/1
3    DE/Rand/1, DE/Best/1, DE/T2B/1
4    DE/Rand/1, DE/Best/1, DE/T2B/1, DE/Best/2
5    DE/Rand/1, DE/Best/1, DE/T2B/1, DE/Best/2, DE/Rand/2

Table 2: Mutant vector (MV) schemes for different population sizes.

NP   MV          MDE (+ = -)   MDESM (+ = -)   JADE (+ = -)

2    DE/Rand/1   0 23 5        2 19 7          2 25 1
     DE/Best/1   14 11 3       15 9 4          15 4 9
     DE/T2B/1    25 3 0        17 9 2          13 10 5

3    DE/Rand/1   11 10 7       7 11 10         12 13 3
     DE/Best/1   24 4 0        20 5 3          8 15 5
     DE/T2B/1    17 9 2        12 10 6         6 15 7

4    DE/Rand/1   20 4 4        12 5 11         14 1 13
     DE/Best/1   21 7 0        19 5 4          12 5 11
     DE/T2B/1    20 4 4        17 1 10         15 1 12
     DE/Best/2   2 3 23        0 4 24          0 3 25

5    DE/Rand/1   19 2 7        12 8 8          14 2 12
     DE/Best/1   24 2 2        16 7 5          15 1 12
     DE/T2B/1    19 2 7        15 4 9          18 3 7
     DE/Best/2   12 6 10       6 7 15          10 2 16
     DE/Rand/2   0 1 27        1 1 26          4 4 20

6    DE/Rand/1   13 5 10       13 7 8          10 4 14
     DE/Best/1   21 3 4        18 6 4          14 3 11
     DE/T2B/1    19 5 4        15 2 11         14 2 12
     DE/Best/2   11 5 12       7 4 17          8 9 11
     DE/Rand/2   1 1 26        1 2 25          1 8 19

50   DE/Rand/1   1 2 25        1 3 24          2 3 23
     DE/Best/1   10 5 13       2 6 20          2 4 22
     DE/T2B/1    0 3 25        0 4 24          0 4 24
     DE/Best/2   0 4 24        0 3 25          0 2 26
     DE/Rand/2   0 2 26        1 2 25          0 1 27

Table 3: Wilcoxon rank-sum test comparison counts for MDEVM against the MDE, MDESM, and JADE schemes on the CEC-2013 benchmark functions, for the population sizes NP listed above and the mutation vector (MV) schemes “DE/Rand/1”, “DE/Best/1”, “DE/T2B/1”, “DE/Best/2”, and “DE/Rand/2”. For each row, if the largest count falls under the “+” column, the MDEVM method has the best overall performance; otherwise, the corresponding method under the column header performs best overall.

5.1 Benchmark Functions and Parameters Setting

All the experiments are conducted on the CEC-2013 testbed 19 , which comprises 28 benchmark functions. It is an improved version of its CEC-2005 20 counterpart, with additional test functions and modified formulas in order to create composite functions, oscillations, and symmetry-breaking transforms. This testbed is divided into three categories: uni-modal functions, multi-modal functions, and composite functions 19 . The parameter settings for all experiments are presented in Table 1, adapted from the literature 3 , 14 , 19 , unless a change is mentioned. The reported values are averaged over 30 independent runs per function per algorithm to minimize the effect of the stochastic nature of the algorithms on the reported results.

The mutation schemes presented in Eq. (1) to Eq. (5) are the five main schemes used in the experiments 61 , 62 . For small and very small population sizes, we use the mutation schemes applicable to each population size according to their structure, as demonstrated in Table 2. For NP = 2, we propose a “DE/Rand/1” mutant vector scheme as

\mathbf{v}_i = \mathbf{x}_{r_1} + F \, (\mathbf{x}_{r_1} - \mathbf{x}_{r_2}), \quad (12)

where the only two available individuals in the population are used. The “DE/Rand/1”, “DE/Best/1”, “DE/T2B/1”, and “DE/Best/2” schemes are used for NP = 4.
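The scheme availability in Table 2 amounts to a small lookup; the sketch below encodes that mapping (the function name is illustrative):

```python
def available_schemes(np_size):
    """Mutation schemes applicable per population size, following Table 2:
    three base schemes for NP = 2 and 3, DE/Best/2 added at NP = 4, and
    DE/Rand/2 added at NP = 5."""
    schemes = ["DE/Rand/1", "DE/Best/1", "DE/T2B/1"]
    if np_size >= 4:
        schemes.append("DE/Best/2")
    if np_size >= 5:
        schemes.append("DE/Rand/2")
    return schemes
```

The restriction reflects how many distinct individuals each scheme's difference terms consume, which is why the two-difference schemes only enter at larger population sizes.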

5.2 Experimental Series 1: Mutation Schemes and Population Size Analysis

The performance of the MDE, MDESM, MDEVM, and JADE schemes is evaluated for the mutation schemes and population sizes in Table 2. The Wilcoxon test results are reported as pair-wise comparisons in Table 3. The symbols “+”, “=”, and “-” indicate a statistically better, equivalent, and worse performance, respectively, compared with the MDEVM algorithm 63 .

Figure 4: The better (+) performance counts for the MDEVM vs. MDE and MDEVM vs. MDESM comparisons for different mutation schemes and population sizes.
(a) Better (+) performance.
(b) Equal (=) performance.
(c) Worse (-) performance.
Figure 5: Average performance of the MDEVM vs. MDE and MDEVM vs. MDESM comparisons for different mutation schemes and population sizes.
Figure 6: Components of highest performance algorithm with respect to the best error for each benchmark function, and function families uni-modal (), multi-modal (), and composite ().

The results in Table 3 demonstrate that the MDEVM and JADE methods generally have competitive results. Interestingly, for very small population sizes, the MDEVM method performs competitively or better than the other methods; this is particularly evident for the “DE/Best/1” and “DE/T2B/1” schemes. The MDEVM method also has more successful results for the “DE/Rand/1”, “DE/Best/1”, and “DE/T2B/1” schemes. However, as the diversity of the population increases by adding more individuals, the MDEVM method achieves less successful results. This situation is evident for NP = 50, where a standard population size is used and the JADE, MDE, and MDESM methods perform much better. The results clearly show that for small population sizes, the VRMF technique in MDEVM adds useful diversity to the population, which results in better performance than the other methods. For large population sizes, however, this diversity enhancement has an adverse effect: the population has more than enough diversity and cannot converge to an optimum, which is the stagnation situation. Since the “DE/Best/2” and “DE/Rand/2” schemes have more exploration capability, due to incorporating more population individuals, using the VRMF technique adds extra diversity to the population; this additional diversity prevents the MDEVM method from converging to optimal solution(s). The difference between the DE and MDE algorithms lies in the population size, which delivers diversity into the population. Combining the VRMF technique with the full-size DE algorithm therefore produces excess diversity and a poor performance. Using the standalone DE algorithm may yield a better performance, but at the cost of a larger number of function calls. Therefore, utilizing the MDE algorithm with small population sizes can deliver both higher diversity and higher performance. Overall, the “DE/Best/1”, “DE/Rand/1”, and “DE/T2B/1” schemes have the best performance among the various mutation schemes for MDEVM.

In Figure 4, a summary of the better-performance counts of all schemes across mutation schemes and population sizes is presented. In order to take a closer look, the averages of the better, equal, and worse performance counts for the MDEVM vs. MDE and MDEVM vs. MDESM comparisons are presented in Figure 5.

Regarding the averages of the better and equivalent performances shown in Figure 5.a and Figure 5.b, it is clear that the “DE/Best/1” scheme has the highest number of successes. In terms of worse performance, it is interesting that as the population size increases, the worse-performance counts, particularly for the “DE/T2B/1”, “DE/Best/2”, and “DE/Rand/2” mutation schemes, increase dramatically.

Figure 7: Best-value-so-far of the MDE, MDESM, and MDEVM schemes for the DE/Rand/1 and DE/Best/1 mutation schemes. For brevity, only selected functions are shown.

The best error value for each benchmark function family is illustrated in Figure 6. The dashed lines separate the uni-modal, multi-modal, and composite benchmark function types. For the uni-modal and multi-modal functions, the VRMF method with the “DE/T2B/1” mutation scheme has the best performance; for the composite functions, the SRMF method with the “DE/T2B/1” mutation scheme performs best. Overall, the “DE/Best/1” mutation scheme is recommended as the best-performing scheme among all. Further in-depth analysis is conducted on the “DE/Best/1” scheme, along with the popular “DE/Rand/1” scheme.

In Figure 7, the performance of the MDE, MDESM, and MDEVM methods for the “DE/Rand/1” and “DE/Best/1” mutation schemes over different numbers of function calls is presented. As an example, in Figure 7(a), the MDEVM method with the “DE/Best/1” scheme has converged faster, while the MDEVM method with the “DE/Rand/1” scheme is still converging with a sharp slope; given a larger budget of function calls, the latter could outperform the MDEVM method with the “DE/Best/1” scheme. The algorithms are discussed further for different numbers of function calls later in this section. Similar behaviour is evident for the other selected functions. Such behaviour is due to the natural diversity of the “DE/Rand/1” scheme.


D     MV          MDE (+ = -)   MDESM (+ = -)   JADE (+ = -)

10    DE/Rand/1   23 3 2        12 13 3         13 3 12
      DE/Best/1   26 0 2        24 3 1          14 4 10

30    DE/Rand/1   22 4 2        17 5 6          14 2 12
      DE/Best/1   21 5 2        20 6 2          15 1 12

50    DE/Rand/1   19 2 7        12 8 8          11 4 13
      DE/Best/1   24 2 2        16 7 5          10 5 13

100   DE/Rand/1   18 3 7        16 5 7          11 7 10
      DE/Best/1   20 5 3        15 9 4          10 6 12

Table 4: Wilcoxon rank-sum test comparison counts for MDEVM vs. the MDE, MDESM, and JADE methods on the CEC-2013 benchmark functions, for the dimensions D listed above and the mutation vector (MV) schemes “DE/Rand/1” and “DE/Best/1”. For each row, if the largest count falls under the “+” column, the MDEVM method has the best overall performance; otherwise, the corresponding method under the column header performs best overall.

5.3 Experimental Series 2: Dimensionality Effects

In this subsection, the performance of the proposed MDEVM method is compared with the MDE, MDESM, and JADE methods over several dimensions with the “DE/Rand/1” and “DE/Best/1” mutation vector schemes, in terms of the best-value-so-far. Considering the MDEVM method as the reference algorithm, a summary of the Wilcoxon test results is reported as pair-wise comparisons in Table 4. The results clearly demonstrate that the proposed MDEVM method outperforms the MDE and MDESM methods for different dimensions. The MDESM method shows a better performance than the MDE method, which is due to the SRMF diversity enhancement technique used in that scheme. Both the “DE/Rand/1” and “DE/Best/1” mutation schemes have competitive performance over all dimensions and MDE schemes. As the dimensionality of the problem increases, the JADE method provides competitive performance versus the MDEVM method. This shows that for high-dimensional problems, an adaptive method along with a small population size can provide a good diversity. The same holds for the MDEVM method, where the diversification of the mutation factor over the decision variables helps the small population increase its diversity, which is suitable for high-dimensional problems.


Reference   MV          MDESM* (+ = -)   MDEVM* (+ = -)

MDESM       DE/Rand/1   14 12 2          6 7 15
            DE/Best/1   14 14 0          7 9 12

MDEVM       DE/Rand/1   19 5 4           3 25 0
            DE/Best/1   19 6 3           19 3 6

Table 5: Summary of performance results of the MDESM and MDEVM approaches versus the MDESM and MDEVM approaches with the alternative mutation factor range; the second set of methods is denoted with the index *.

5.4 Experimental Series 3: Mutation Factor’s Range Analysis

The most common mutation factor in the literature is F = 0.9, selected from the recommended range 62 . Recently, different values for F and its range have been proposed, such as in 1 and 3 . Therefore, some experiments are conducted in this subsection to analyze the effect of the mutation factor range on the performance of the MDESM and MDEVM approaches. Considering the mutation vector schemes “DE/Rand/1” and “DE/Best/1”, the best error, standard deviation, and Wilcoxon rank-sum test results, with the MDESM and MDEVM algorithms as references, are presented in Table 5. Two mutation factor ranges are considered for the MDESM and MDEVM approaches; the approaches with the second range are denoted by the index *, i.e., MDESM* and MDEVM*. The columns report the performance of the MDESM* and MDEVM* methods versus the MDESM and MDEVM methods, respectively.

As demonstrated in Table 5, the MDESM method with the wider mutation factor range performs somewhat better than with the limited range. However, the MDEVM method outperforms the MDESM method, due to the diversity delivered into it by the VRMF approach. The results also demonstrate that selecting F from the wider interval yields a better performance than the limited interval, while the two MDEVM variants show almost equal performance. Overall, the better performance of the MDEVM method with the wider range is evident, since it draws diversity from both the VRMF and the wider mutation factor range.

5.5 Experimental Series 4: Population’s Diversity Analysis

The VRMF method can empower the MDE algorithm to escape from local optima and to decrease the stagnation risk. In order to analyze the effect of the randomization of the mutation factor on the population diversity, using the centroid diversity measure and the performance of the MDE algorithm, the best-value-so-far and population diversity plots of the MDE, MDESM, and MDEVM methods are presented for the composite functions in Figure 8. The simulations are conducted with the “DE/Rand/1” and “DE/Best/1” schemes. To give a better sense of the analysis, a larger maximum number of function calls is considered.

Figure 8: Performance comparison and population centroid-based distance diversity analysis among the MDE, MDESM, and MDEVM schemes for the maximum number of function calls , dimension , population size , and DE/Rand/1 and DE/Best/1 mutation schemes.

The MDEVM method with the “DE/Rand/1” mutation scheme has the best performance for the first of these functions, as shown in Figure 8a, denoted by “B”. The population diversities in Figure 8d, and in Figure 8j for the “DE/Best/1” mutation scheme, clearly show that while the MDE and MDESM methods stagnate for both mutation schemes, indicated by an almost static, large centroid diversity value, the MDEVM method with “DE/Rand/1” escapes from the stagnation, denoted by region “A”, and then tries to converge in the generations denoted by region “B”. When the diversity is high while the best-value-so-far remains almost static, the population is considered stagnated. In the case of trapping in a local minimum, the population is not diverse and the diversity is low, while the best-value-so-far performance is poor.
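The rule of thumb stated above can be sketched as a small classifier; the thresholds and function name here are illustrative assumptions, not values from the paper:

```python
def classify_state(diversity, best_improvement,
                   div_threshold=1.0, imp_threshold=1e-8):
    """Heuristic from the text: a static best-value-so-far with high population
    diversity suggests stagnation, while a static best-value-so-far with low
    diversity suggests trapping in a local optimum. Thresholds are illustrative."""
    static = best_improvement < imp_threshold  # best-value-so-far barely moving
    if static and diversity >= div_threshold:
        return "stagnation"
    if static and diversity < div_threshold:
        return "trapped"
    return "progressing"
```

Such a check could be run per generation on the centroid diversity of Eq. (9) and the change in the best-value-so-far.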

For the second function, the MDEVM method has the best performance with both the “DE/Rand/1” and “DE/Best/1” schemes, as shown in Figure 8b and Figure 8h. The MDE algorithm is trapped in a local minimum for both mutation schemes, while the MDESM method shows a better capability to escape from both stagnation and local optimum trapping, denoted by region “C” in Figure 8e and Figure 8k. The MDEVM has the best best-value-so-far for both mutation schemes. For the “DE/Rand/1” mutation scheme, the population's diversity shows a convergence trend similar to that of the MDESM method, but MDEVM achieves a much better best-value-so-far in the early generations (i.e., the exploration phase) and is then trapped in the local minimum, as denoted in region “C” of Figure 8b. The same behaviour is evident for the “DE/Best/1” mutation scheme, as shown in Figure 8h, where in region “A” it converges to a solution. The corresponding diversity measure is well illustrated in Figure 8k: in region “A”, which is the exploration phase, the population's diversity decreases and the population converges, as shown in region “B”; in “D”, it is briefly trapped but recovers quickly to the same level as in region “B”.

The exploration power of the VRMF is well illustrated for the third function, as shown in Figure 8c and Figure 8i. In Figure 8c, it is clear that the VRMF technique has rescued the algorithm from stagnation (denoted by “A”) and, with a sudden movement denoted by region “B”, it reaches a better performance than the other methods in region “C”. Figure 8f shows this clearly: the MDEVM algorithm is rescued from stagnation (region “A”) and gradually converges, as shown in regions “B” and “C”. Meanwhile, the MDE algorithm is completely trapped in a local minimum, since its best-value-so-far remains constant and the population diversity is extremely low for all generations in Figure 8f. The MDESM has tried to converge (part “D” of Figure 8f) to the solution presented in part “E” of Figure 8c. However, its exploration stops, as shown in parts “E” of Figure 8c and Figure 8f, and no further improvements are achieved. For the “DE/Best/1” mutation scheme, the MDE is trapped in a local minimum, similar to the “DE/Rand/1” case, as shown in Figure 8i and Figure 8l. The MDESM achieves a better performance by converging its population toward a solution, as denoted by regions “C” and “A” in Figure 8i and Figure 8l, respectively. In later generations, although it spends some time in the generations denoted by region “B” in Figure 8l searching for a better solution, it is finally trapped in a local minimum, as illustrated in part “C” of Figure 8l. The MDEVM experiences a trend similar to the MDESM (regions “A”, “D”, and “E” for the centroid diversity in Figure 8l), but with a better performance from region “A” toward region “B” of Figure 8i.

The centroid-based diversity measure, along with the best-value-so-far analysis, clearly demonstrates the behaviour of the MDE, MDESM, and MDEVM algorithms in stagnation and local optimum trapping scenarios. The results indicate the successful performance of the VRMF approach in delivering diversity into the population: after some generations in which the algorithm is trapped in a local optimum or stagnated, it is rescued and moves toward better solutions, while the other algorithms cannot recover.

6 Conclusion and Future Work

In evolutionary algorithms (EAs), the population size is critical in terms of providing diversity for the search procedure. This is particularly true in differential evolution (DE), where the correct selection of the mutation factor is also crucial in delivering diversity into the population. Larger population sizes normally provide higher diversity at a higher computational cost, with a lower chance of stagnation and premature convergence due to their high exploration capability. In addition, DE can generate only a limited number of mutant vectors when using a constant mutation factor. The DE algorithm with a small population size, the MDE algorithm, converges to a solution faster than the standard DE algorithm, yet the chance of stagnation and premature convergence increases as well. To avoid such situations, the diversity should be increased while keeping the convergence speed of the algorithm high. The crossover technique is one of the methods to inject diversity into the population; in conjunction with a better mutation scheme, it can provide a higher diversity and a possibly faster discovery of solutions.

In this paper, we have proposed an enhanced version of the micro-DE (MDE) algorithm based on the important capability of the mutation factor to provide diversity in the population, i.e. the micro-differential evolution using vectorized random mutation factor (MDEVM) algorithm. In this approach, in contrast to the standard MDE, the mutation factor is selected randomly for each decision variable of each individual in the population. In this case, the population can provide much higher diversity during the search process. In order to analyze the performance of the proposed MDEVM algorithm, we have conducted experiments for different schemes of the mutation factor. The results demonstrate that the proposed MDEVM method is capable of solving complex optimization problems with very small population size and has competitive performance with the JADE approach.

Since the population size of MDEVM is small, the proposed parallel version of MDEVM can be implemented in such a way that each individual, or a small group of individuals, is evaluated on one central processing unit (CPU) core. As an example, for a population size of four, each individual can be processed on one core of a quad-core CPU, and most of today's smart devices are equipped with such processors. To design fast but reliable optimization algorithms for real-time applications, mostly in embedded systems, micro-algorithms are one of the promising approaches, particularly via implementation on field-programmable gate arrays (FPGAs), which can provide low-power-consumption capabilities for certain applications. It would also be interesting to investigate an adaptive version of the MDEVM, where the randomness is controlled with different probability distributions as the population progresses. The compact versions of DE have a lot in common with the micro approaches; applying the same idea to small virtual populations is worth further research.

References

  • (1) F. Caraffini, F. Neri, and I. Poikolainen, “Micro-differential evolution with extra moves along the axes,” in Proc. IEEE Symposium on Differential Evolution, 2013, pp. 46-53.
  • (2) J. Lampinen and I. Zelinka, “On Stagnation of the Differential Evolution Algorithm,” in Proc. of 6th International Mendel Conference on Soft Computing, 2000, pp. 76-83.
  • (3) M. Olguin-Carbajal, E. Alba, and J. Arellano-Verdejo, “Micro-Differential Evolution with Local Search for High Dimensional Problems,” in Proc. IEEE Congress on Evolutionary Computation, 2013, pp. 48-54.
  • (4) F. Viveros-Jiménez, E. Mezura-Montes, and A. Gelbukh, “Empirical analysis of a micro-evolutionary algorithm for numerical optimization,” International Journal of Physical Sciences, vol. 7(8), pp. 1235-1258, 2012.
  • (5) K. Krishnakumar, “micro-genetic algorithms for stationary and non-stationary function optimization,” Intell. Control Adapt. Syst., vol. 1196, pp. 289-296, 1989.
  • (6) T. Huang and A. S. Mohan, “Micro-particle swarm optimizer for solving high dimensional optimization problems (μPSO for high dimensional optimization problems),” Appl. Math. Comput., vol. 181, no. 2, pp. 1148-1154, Oct. 2006.
  • (7) K. E. Parsopoulos, “Parallel cooperative micro-particle swarm optimization: A master–slave model,” Appl. Soft Comput., vol. 12, no. 11, pp. 3552-3579, Nov. 2012.
  • (8) J. Brest and M. Sepesy Maučec, “Population size reduction for the differential evolution algorithm,” Appl. Intell., vol. 29, no. 3, pp. 228-247, Sep. 2007.
  • (9) K. E. Parsopoulos, “Cooperative micro-differential evolution for high-dimensional problems,” in Proc. 11th Annual conference on Genetic and evolutionary computation, 2009.
  • (10) Choo Jun Tan, Chee Peng Lim, and Yu-N Cheah, “A Modified micro Genetic Algorithm for undertaking Multi-Objective Optimization Problems,” Journal of Intelligent and Fuzzy Systems, vol. 24, no. 3, pp.483-495, 2013.
  • (11) M. A. Sotelo-figueroa, H. José, P. Soberanes, J. M. Carpio, H. J. F. Huacuja, L. C. Reyes, J. Alberto, and S. Alcaraz, “Evolving Bin Packing Heuristic Using Micro-Differential Evolution with Indirect,” Recent Advances on Hybrid Intelligent Systems, 2013, pp. 349-359.
  • (12) N. S. Teng, J. Teo, and M. H. a. Hijazi, “Self-adaptive population sizing for a tune-free differential evolution,” Soft Comput., vol. 13, no. 7, pp. 709-724, Jul. 2009.
  • (13) S. Rahnamayan and H. R. Tizhoosh, “Image thresholding using micro opposition-based Differential Evolution (Micro-ODE),” in Proc. IEEE Congress on Evolutionary Computation, 2008, pp. 1409-1416.
  • (14) S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, “Opposition-Based Differential Evolution,” IEEE Trans. Evol. Comput., vol. 12, no. 1, pp. 64-79, Feb. 2008.
  • (15) A. Esmailzadeh and S. Rahnamayan, “Enhanced Differential Evolution Using Center-Based Sampling,” in Proc. IEEE Congress on Evolutionary Computation, 2011, pp. 2641-2648.
  • (16) X. Zhang and S. Y. Yuen, “Opposition-based adaptive differential evolution,” in Proc. IEEE Congress on Evolutionary Computation, 2012, pp. 1-8.
  • (17) J.-P. Chiou, C.-F. Chang, and C.-T. Su, “Ant Direction Hybrid Differential Evolution for Solving Large Capacitor Placement Problems,” IEEE Trans. Power Syst., vol. 19, no. 4, pp. 1794-1800, Nov. 2004.
  • (18) D. G. Kurup, M. Himdi, and A. Rydberg, “Synthesis of uniform amplitude unequally spaced antenna arrays using the differential evolution algorithm,” IEEE Trans. Antennas Propag., vol. 51, no. 9, pp. 2210-2217, Sep. 2003.
  • (19) J. J. Liang, B-Y. Qu, P. N. Suganthan, and Alfredo G. Hernández-Díaz, “Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session and Competition on Real-Parameter Optimization,” Technical Report 201212, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou China and Technical Report, Nanyang Technological University, Singapore, January 2013.
  • (20) P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y. P. Chen, A. Auger, and S. Tiwari, “Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization,” Nanyang Tech. Univ., Singapore and KanGAL, Kanpur Genetic Algorithms Lab., IIT, Kanpur, India, Tech. Rep., No.2005005, May 2005.
  • (21) D. Lahoz, B. Lacruz, and P. M. Mateo, “A multi-objective micro genetic ELM algorithm,” Neurocomputing vol. 111, pp. 90-103, 2013.
  • (22) F. Neri and V. Tirronen, “Recent Advances in Differential Evolution: A Review and Experimental Analysis,” Artificial Intelligence Review, vol. 33, no. 1, pp. 61-106, February 2010.
  • (23) F. Neri, E. Mininno, “Memetic Compact Differential Evolution for Cartesian Robot Control,” IEEE Computational Intelligence Magazine, Volume 5, Issue 2, pages 54-65, May 2010
  • (24) S. Das and A. Konar, “An improved differential evolution scheme for noisy optimization problems,” in Pattern Recognition and Machine Intelligence, vol. 3776 of Lecture Notes in Computer Science, Springer, Berlin, 2005, pp. 417-421.

  • (25) Das S, Konar A, Chakraborty U (2005) Improved differential evolution algorithms for handling noisy optimization problems. In: Proceedings of the IEEE congress on evolutionary computation, vol 2, pp 1691-1698
  • (26) Price KV, Storn R, Lampinen J (2005) Differential evolution: a practical approach to global optimization, Springer, Berlin.
  • (27) M. Weber, F. Neri, V. Tirronen, “A Study on Scale Factor in Distributed Differential Evolution,” Information Sciences, Elsevier, Volume 181, Issue 12, pages 2488-2511, June 2011.
  • (28) E. Mininno, F. Neri, F. Cupertino, and D. Naso, “Compact Differential Evolution,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 32-54, February 2011.
  • (29) J. Brest, S. Greiner, B. Boskovic, M. Mernik, V. Zumer, “Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems,” IEEE Trans. Evol. Comput. 10 (6) (2006) 646-657.
  • (30) J. Brest, A. Zamuda, B. Boskovic, V. Zumer. High-Dimensional Real-Parameter Optimization using Self-Adaptive Differential Evolution Algorithm with Population Size Reduction. Conference Information: IEEE Congress on Evolutionary Computation, JUN 01-06, 2008 Hong Kong, PEOPLES R CHINA, 2008.
  • (31) J. Brest, A. Zamuda, B. Boskovic, M. Sepesy Maucec, and V. Zumer, “Dynamic optimization using Self-Adaptive Differential Evolution,” in Proc. IEEE Congress on Evolutionary Computation, 2009, pp. 415-422.
  • (32) C. Brown, Y. Jin, M. Leach, and M. Hodgson, “JADE: adaptive differential evolution with a small population,” Soft Computing, pp. 1-10, 2015.
  • (33) C. Segura, C. A. Coello Coello, and A. G. Hernández-Díaz, “Improving the vector generation strategy of Differential Evolution for large-scale optimization,” Information Sciences, vol. 323, pp. 106-129, 2015.
  • (34) I. Fajfar, T. Tuma, J. Puhan, J. Olenšek, and Á. Bűrmen, “Towards Smaller Populations in Differential Evolution,” Electronic Components and Materials, vol. 42, no. 3, pp. 152-163, 2012.
  • (35) C. A. C. Coello and G. T. Pulido, “Multiobjective optimization using a micro-genetic algorithm,” in Proc. Genetic Evolutionary Computation Conference, 2001, pp. 274-282.
  • (36) P. C. Ribas, L. Yamamoto, H. L. Polli, L. V. R. Arruda, and F. Neves-Jr, "A micro-genetic algorithm for multi-objective scheduling of a real world pipeline network," Eng. Appl. Artif. Intell., vol. 26, no. 1, pp. 302-313, Jan. 2013.
  • (37) Y. G. Xu and G. R. Liu, “Detection of flaws in composites from scattered elastic-wave field using an improved GA and a local optimizer,” Comput. Methods Appl. Mechanics Eng., vol. 191, no. 36, pp. 3929-3946, 2002.
  • (38) D. E. Goldberg, "Sizing populations for serial and parallel genetic algorithms," in Proc. 3rd International Conference on Genetic Algorithms, 1989, pp. 70-79.
  • (39) J. Tippayachai, W. Ongsakul, and I. Ngamroo, “Parallel micro genetic algorithm for constrained economic dispatch,” IEEE Transactions on Power Systems, vol.17, no.3, pp.790-797, Aug 2002.
  • (40) S. Tiwari, G. Fadel, and K. Deb, “AMGA2: improving the performance of the archive-based micro-genetic algorithm for multi-objective optimization,” Engineering Optimization, vol. 43, no. 4, pp. 377-401, 2011.
  • (41) D. Sanchez, P. Melin, O. Castillo, and F. Valdez, "Modular granular neural networks optimization with multi-objective hierarchical genetic algorithm for human recognition based on iris biometric," in Proc. IEEE Congress on Evolutionary Computation, 2013, pp. 772-778.
  • (42) J. H. Ang, C. K. Goh, E. J. Teoh, and A. A. Mamun, "Multi-objective evolutionary Recurrent Neural Networks for system identification," in Proc. IEEE Congress on Evolutionary Computation, 2007, pp. 1586-1592.
  • (43) K. Itoh, K. Miyata, and H. Igarashi, “Evolutional Design of Waveguide Slot Antenna With Dielectric Lenses,” IEEE Trans. Magn., vol. 48, no. 2, pp. 779-782, Feb. 2012.
  • (44) F. Neri, G. Iacca, and E. Mininno, “Compact Optimization,” Handbook of Optimization, Springer Berlin Heidelberg, 2013, pp. 337-364.
  • (45) A. Prugel-Bennett, "Benefits of a Population: Five Mechanisms That Advantage Population-Based Algorithms," IEEE Trans. Evol. Comput., vol. 14, no. 4, pp. 500-517, 2010.
  • (46) H. Salehinejad, S. Rahnamayan, H. R. Tizhoosh, and S. Y. Chen, “Micro-Differential Evolution with Vectorized Random Mutation Factor,” in Proc. IEEE Congress on Evolutionary Computation, 2014.
  • (47) K. M. Bakwad, S. S. Pattnaik, B. S. Sohi, S. Devi, S. V. R. S. Gollapudi, C. V. Sagar, and P. K. Patra, “Fast motion estimation using small population-based modified parallel particle swarm optimisation,” Int. J. Parallel, Emergent Distrib. Syst., vol. 26, no. 6, pp. 457-476, Dec. 2011.
  • (48) J. C. F. Cabrera and C. A. C. Coello, “Handling Constraints in Particle Swarm Optimization Using a Small Population Size,” MICAI 2007: Advances in Artificial Intelligence, 2007, pp. 41-51.
  • (49) J. C. F. Cabrera and C. A. C. Coello, "Micro-MOPSO: A Multi-Objective Particle Swarm Optimizer That Uses a Very Small Population Size," Multi-Objective Swarm Intelligent Systems, Springer Berlin Heidelberg, 2010, pp. 83-104.
  • (50) T. K. Das and G. K. Venayagamoorthy, “Optimal Design of Power System Stabilizers Using a Small Population Based PSO,” in Proc. IEEE Power Engineering Society General Meeting, 2006, pp. 1-7.
  • (51) T. K. Das, S. R. Jetti, and G. K. Venayagamoorthy, “Optimal Design of SVC Damping Controllers with Wide Area Measurements Using Small Population based PSO,” in Proc. International Joint Conference on Neural Networks, 2006, pp. 2255-2260.
  • (52) H. Salehinejad, R. Zadeh, R. Liscano, and S. Rahnamayan, "3D localization in large-scale Wireless Sensor Networks: A micro-differential evolution approach," in Proc. IEEE 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC), 2014, pp. 1824-1828.
  • (53) T. K. Das, G. K. Venayagamoorthy, and U. O. Aliyu, “Bio-Inspired Algorithms for the Design of Multiple Optimal Power System Stabilizers: SPPSO and BFA,” IEEE Trans. Ind. Appl., vol. 44, no. 5, pp. 1445-1457, 2008.
  • (54) H. Salehinejad, S. Rahnamayan, and H. R. Tizhoosh, "Type-II opposition-based differential evolution," in Proc. IEEE Congress on Evolutionary Computation (CEC), 2014, pp. 1768-1775.
  • (55) P. Mitra and G. Venayagamoorthy, “Empirical study of a hybrid algorithm based on clonal selection and small population based PSO,” in Proc. IEEE Swarm Intelligence Symposium, 2008, pp. 1-7.
  • (56) K. E. Parsopoulos, “Cooperative micro-particle swarm optimization,” in Proc. First ACM/SIGEVO Summit on Genetic and Evolutionary Computation, 2009, pp. 467-474.
  • (57) W.-H. Han, "Improved MICROPSO Algorithm and Its Application on Reactive Power Optimization," in Proc. Asia-Pacific Power and Energy Engineering Conference, 2012, pp. 1-4.
  • (58) S. Rahnamayan, J. Jesuthasan, F. Bourennani, H. Salehinejad, and G. F. Naterer, "Computing opposition by involving entire population," in Proc. IEEE Congress on Evolutionary Computation (CEC), 2014, pp. 1800-1807.
  • (59) S. Rahnamayan, J. Jesuthasan, F. Bourennani, G. F. Naterer, and H. Salehinejad, "Centroid Opposition-Based Differential Evolution," International Journal of Applied Metaheuristic Computing, pp. 1-25, 2014.
  • (60) D. Wu, D. Gan, and J. N. Jiang, “An improved micro-particle swarm optimization algorithm and its application in transient stability constrained optimal power flow,” Int. Trans. Electr. Energy Syst., vol. 24, no. 3, pp. 395-411, 2014.
  • (61) H. Salehinejad and S. Talebi, "Dynamic fuzzy logic-ant colony system-based route selection system," Applied Computational Intelligence and Soft Computing, 2010.
  • (62) J. Zhang, J. Wang, and C. Yue, “Small Population-Based Particle Swarm Optimization for Short-Term Hydrothermal Scheduling,” IEEE Trans. Power Syst., vol. 27, no. 1, pp. 142-152, Feb. 2012.
  • (63) T. Huang and A. S. Mohan, “A Novel Micro-Particle Swarm Optimizer for Solving High Dimensional Optimization Problems,” in Proc. IEEE Antennas and Propagation Society International Symposium, 2006, no. 1, pp. 3535-3538.
  • (64) C. Wang, Y. Liu, and Y. Zhao, “Application of dynamic neighborhood small population particle swarm optimization for reconfiguration of shipboard power system,” Eng. Appl. Artif. Intell., vol. 26, no. 4, pp. 1255-1262, Apr. 2013.
  • (65) A. Rajasekhar and S. Das, “Cooperative Micro Artificial Bee Colony Algorithm for Large Scale Global Optimization Problems,” Swarm, Evolutionary, and Memetic Computing, Springer Berlin Heidelberg, 2013, pp. 469-480.
  • (66) S. Dasgupta, A. Biswas, S. Das, B. K. Panigrahi, and A. Abraham, “A Micro-Bacterial Foraging Algorithm for High-Dimensional Optimization,” in Proc. IEEE Congress on Evolutionary Computation, 2009, pp. 785-792.
  • (67) N. Pandit, A. Tripathi, S. Tapaswi, and M. Pandit, “Static/Dynamic Environmental Economic Dispatch Employing Chaotic Micro Bacterial Foraging Algorithm,” Swarm, Evolutionary, and Memetic Computing, Springer Berlin Heidelberg, 2011, pp. 585-592.
  • (68) J. C. Herrera-lozada, H. Calvo, and H. Taud, “A Micro Artificial Immune System,” Polibits, vol. 43, pp. 107-111, 2011.
  • (69) F. Viveros-Jiménez, E. Mezura-Montes, and A. Gelbukh, “Elitistic Evolution: A Novel Micro-population Approach for Global Optimization Problems,” in Proc. 8th Mexican International Conference on Artificial Intelligence, 2009, pp. 15-20.
  • (70) V. H. Hinojosa and R. Araya, “Modeling a mixed-integer-binary small-population evolutionary particle swarm algorithm for solving the optimal power flow problem in electric power systems,” Appl. Soft Comput., vol. 13, no. 9, pp. 3839-3852, Sep. 2013.
  • (71) S. Nesmachnow, H. Cancela, and E. Alba, “A parallel micro evolutionary algorithm for heterogeneous computing and grid scheduling,” Appl. Soft Comput., vol. 12, no. 2, pp. 626-639, Feb. 2012.
  • (72) K.Y. Tsai and F.S. Wang, “Evolutionary optimization with data collocation for reverse engineering of biological networks,” Bioinformatics, vol. 21, no. 7, pp. 1180-1188, Apr. 2005.
  • (73) S. Das and P. N. Suganthan, "Differential Evolution: A Survey of the State-of-the-Art," IEEE Trans. Evol. Comput., vol. 15, no. 1, pp. 4-31, 2011.
  • (74) K. Price, R. Storn, and J. Lampinen, Differential Evolution-A Practical Approach to Global Optimization, Berlin, Germany: Springer, 2005.
  • (75) F. Wilcoxon, “Individual comparisons by ranking methods,” Biometrics Bulletin, vol. 1, no. 6, pp. 80-83, 1945.
  • (76) D. D. Davendra and I. Zelinka, "GPU Based Enhanced Differential Evolution Algorithm: A Comparison between CUDA and OpenCL," Handbook of Optimization, Springer Berlin Heidelberg, 2013, pp. 845-867.
  • (77) K. Tagawa, “Concurrent Implementation Techniques Using Differential Evolution for Multi-Core CPUs: A Comparative Study Using Statistical Tests,” Evolution, Complexity and Artificial Life, S. Cagnoni, M. Mirolli, and M. Villani, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014, pp. 261-280.
  • (78) L. de P. Veronese and R. a. Krohling, “Differential evolution algorithm on the GPU with C-CUDA,” in Proc. IEEE Congress on Evolutionary Computation, 2010, pp. 1-7.