1 Introduction
Many real-world optimization problems can be formulated as constrained optimization problems, which have a set of constraints Floudas1990A; gen2000genetic. Without loss of generality, a constrained single-objective optimization problem (CSOP) can be defined as follows:
$$
\begin{aligned}
\min \quad & f(\mathbf{x})\\
\text{subject to} \quad & g_i(\mathbf{x}) \le 0, \quad i = 1, \ldots, p\\
& h_j(\mathbf{x}) = 0, \quad j = 1, \ldots, q\\
& \mathbf{x} \in \mathbb{S}
\end{aligned} \tag{1}
$$
where $f(\mathbf{x})$ is the objective function, $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ is the decision vector, and $x_i$ is the $i$th component of $\mathbf{x}$. $\mathbb{S} = \prod_{i=1}^{n}[l_i, u_i]$ is the decision space, where $l_i$ and $u_i$ are the lower and the upper bounds of $x_i$. $g_i(\mathbf{x})$ denotes the $i$th inequality constraint, and $h_j(\mathbf{x})$ denotes the $j$th equality constraint.
In order to evaluate the constraint violation of a solution $\mathbf{x}$, the overall constraint violation method is widely used; it summarizes all the constraints into a scalar value as follows:
$$
\phi(\mathbf{x}) = \sum_{i=1}^{p} \max\left(g_i(\mathbf{x}),\, 0\right) + \sum_{j=1}^{q} \max\left(|h_j(\mathbf{x})| - \delta,\, 0\right) \tag{2}
$$
where $\delta$ is an extremely small positive tolerance, set as suggested in Ref. wu2016problem. If $\phi(\mathbf{x}) = 0$, $\mathbf{x}$ is a feasible solution; otherwise, it is an infeasible solution.
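As a concrete illustration, the overall constraint violation of Eq. (2) can be computed as in the following sketch. The default tolerance `delta=1e-4` is only an illustrative assumption, since the exact value follows Ref. wu2016problem:

```python
def overall_violation(x, ineq_cons, eq_cons, delta=1e-4):
    """Overall constraint violation of Eq. (2): inequality violations
    max(g_i(x), 0) plus equality violations max(|h_j(x)| - delta, 0)."""
    v = sum(max(g(x), 0.0) for g in ineq_cons)
    v += sum(max(abs(h(x)) - delta, 0.0) for h in eq_cons)
    return v
```

A solution is feasible exactly when this sum equals zero.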
As a kind of population-based optimization algorithm, evolutionary algorithms (EAs) have attracted much interest in solving CSOPs, because they impose no requirements on the objectives and constraints of CSOPs. To solve CSOPs, there are two basic components in constrained EAs: one is the single-objective evolutionary algorithm (SOEA), and the other is the constraint-handling technique.
In terms of SOEAs, differential evolution (DE) is arguably one of the most powerful and versatile evolutionary optimizers of recent times 5601760; DAS20161, for two main reasons. First, the structure of DE is very simple, so a DE algorithm is easy to implement in any currently popular programming language. Second, DE has few parameters, which makes it very convenient for a novice to solve optimization problems with DE 8315135. In recent years, many different DE variants have been suggested, including FADE liu2005fuzzy, jDE 4016057, JADE 5208221, CoDE 5688232, SHADE tanabe2013success, LSHADE 6900380, LSHADE44 7969504 and so on. In FADE liu2005fuzzy, fuzzy logic controllers are employed to adapt the search parameters for the mutation and crossover operations. In jDE 4016057, a self-adaptive method is proposed to determine the values of the scale factor $F$ and the crossover rate $CR$. In JADE 5208221, a greedy DE/current-to-pbest/1 mutation operator with an optional external archive is proposed, and the control parameters $F$ and $CR$ are adaptively updated in each generation. CoDE 5688232 randomly combines three trial vector generation strategies with three control parameter settings to generate trial vectors. In SHADE tanabe2013success, an adaptive parameter-setting technique based on historical memories of successful parameters is proposed to generate trial vectors. LSHADE 6900380 is an improved version of SHADE which reduces the population size linearly during the evolutionary process. As a variant of LSHADE, LSHADE44 7969504 proposes a strategy to adaptively select among four different kinds of trial vector generation strategies.
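To make the DE machinery concrete, the classic DE/rand/1/bin strategy underlying these variants can be sketched as follows; the default bounds and parameter values here are illustrative assumptions, not settings from this paper:

```python
import random

def de_rand_1_bin(pop, i, F=0.5, CR=0.9, lower=-5.0, upper=5.0):
    """Build one trial vector for target i: mutant v = x_r1 + F*(x_r2 - x_r3),
    followed by binomial crossover with rate CR and bound clipping."""
    n = len(pop[i])
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    jrand = random.randrange(n)  # guarantees at least one mutant component
    trial = []
    for d in range(n):
        if random.random() < CR or d == jrand:
            v = pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            trial.append(min(max(v, lower), upper))
        else:
            trial.append(pop[i][d])
    return trial
```

In a full DE loop, the trial vector then competes with its parent for survival.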
The constraint-handling technique is the other key component in constrained EAs. Many constraint-handling methods have been proposed in evolutionary optimization MEZURAMONTES2011173; Coello:2017. They can generally be classified into four types: penalty function methods, separation of objectives and constraints, multiobjective evolutionary algorithms (MOEAs), and hybrid methods MEZURAMONTES2011173; Coello:2017; CoelloCoello20021245. The penalty function method is widely used due to its simplicity in constraint handling (Runarsson:2005jd). It adopts a penalty factor $\lambda$ to maintain a balance between minimizing the objectives and satisfying the constraints: a CSOP is converted into an unconstrained single-objective optimization problem (SOP) by adding the overall constraint violation, multiplied by a predefined penalty factor $\lambda$, to the objective CoelloCoello20021245. If $\lambda = +\infty$, the penalty function method is called a death penalty approach bdack1991survey, which means that infeasible solutions are completely unacceptable. If $\lambda$ is a static value during the evolutionary process, it is called a static penalty method homaifar1994constrained. If $\lambda$ changes during the evolutionary process, it is called a dynamic penalty method joines1994use. In the case in which $\lambda$ changes dynamically according to information collected during the evolutionary process, it is called an adaptive penalty approach bean1993dual; coit1996adaptive; ben1997genetic; 4799193. However, for an arbitrary CSOP, the ideal penalty factors cannot be known in advance; in fact, the ideal penalty factors should be dynamic parameters.
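A minimal sketch of the static penalty conversion just described, where the penalty factor `lam` is a user-supplied assumption:

```python
def penalized_objective(f, violation, lam):
    """Static penalty method: wrap a CSOP objective f and its overall
    constraint violation into one unconstrained objective f + lam * violation."""
    def unconstrained(x):
        return f(x) + lam * violation(x)
    return unconstrained
```

Any unconstrained SOEA can then minimize the returned function directly; the difficulty lies entirely in choosing `lam` well.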
In the separation of objectives and constraints methods, the objectives and constraints are compared separately. Compared with the penalty function methods, there is no need to tune penalty factors, and this type of constraint-handling method has had a relatively high impact in evolutionary optimization in recent years. Representative examples include the superiority of feasible (SF) solutions DEB2000311, the $\varepsilon$ constraint-handling method takahama2005constrained, the stochastic ranking approach (SR) runarsson2000stochastic, and so on. In SF DEB2000311, three basic rules are used to compare any two solutions. In rule 1, of two infeasible solutions, the one with the smaller overall constraint violation is better. In rule 2, if one solution is feasible and the other is infeasible, the feasible one is preferred. In rule 3, of two feasible solutions, the one with the smaller objective value is better. In the $\varepsilon$ constraint-handling method, the relaxation of the constraints is controlled by the epsilon level $\varepsilon$, which helps to maintain a search balance between feasible and infeasible regions during the evolutionary process: if the overall constraint violation of a solution is less than $\varepsilon$, this solution is deemed feasible. Therefore, the epsilon level $\varepsilon$ is a critical parameter. In the case of $\varepsilon = 0$, the $\varepsilon$ constraint-handling method is the same as SF 996017. Although $\varepsilon$ constraint-handling is a very popular method, controlling the $\varepsilon$ level properly is not at all trivial.
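The three SF rules described above reduce to a simple pairwise comparison, sketched here:

```python
def sf_better(fa, va, fb, vb):
    """Superiority of feasible solutions: decide whether solution a
    (objective fa, violation va) is preferred over solution b (fb, vb)."""
    if va == 0.0 and vb == 0.0:   # rule 3: both feasible, smaller objective wins
        return fa < fb
    if va == 0.0 or vb == 0.0:    # rule 2: the feasible one is preferred
        return va == 0.0
    return va < vb                # rule 1: both infeasible, smaller violation wins
```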
In SR runarsson2000stochastic, a probability parameter $p_f$ is employed to decide whether the comparison is based on objectives or constraints. For any two solutions, if a random number is less than or equal to $p_f$, the one with the smaller objective value is better, i.e., the comparison is based on objectives. If the random number is greater than $p_f$, the comparison is based on the overall constraint violation.

In order to balance the constraints and the objectives, some researchers adopt multiobjective evolutionary algorithms (MOEAs) to deal with constraints MezuraMontes:2011cj. For example, the constraints of a CSOP can be converted into one or more extra objectives; the CSOP is thereby transformed into an unconstrained multiobjective optimization problem, which can be solved by MOEAs. Representative examples include Cai and Wang's Method (CW) cai2006multiobjective and the infeasibility driven evolutionary algorithm (IDEA) ray2009infeasibility.
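The SR comparison can be sketched as follows. The default `pf=0.45` is only an illustrative value taken from the common SR setting, not a prescription of this paper; feasible pairs are compared by objective regardless of the coin flip:

```python
import random

def sr_better(fa, va, fb, vb, pf=0.45):
    """Stochastic ranking: compare by objective with probability pf (or when
    both solutions are feasible), otherwise compare by overall violation."""
    if (va == 0.0 and vb == 0.0) or random.random() <= pf:
        return fa < fb
    return va < vb
```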
In hybrid constraint-handling methods, several constraint-handling mechanisms are hybridized to deal with constraints. For example, the adaptive trade-off model (ATM) wang2008adaptive uses two different constraint-handling mechanisms, a multiobjective approach and an adaptive penalty function method, in different evolutionary stages. In Ref. qu2011constrained, three different constraint-handling techniques, namely $\varepsilon$ constraint-handling takahama2005constrained, self-adaptive penalty functions (SP) 4799193 and SF 996017, are employed to deal with constraints.
It can be concluded that most of the above-mentioned constraint-handling methods have some limitations. The recently proposed push and pull search (PPS) FAN2018 is a general framework for constrained optimization, and it has already been shown that PPS has many advantages in dealing with constrained multiobjective optimization problems FAN2018. In this paper, we investigate the performance of PPS in solving CSOPs: the PPS method is integrated into an adaptive DE framework. The contributions of this paper are summarized as follows:

The push and pull search technique and the SF constraint-handling method are successfully embedded into an adaptive DE framework for constrained single-objective optimization.

Two subpopulations, which use different constraint-handling mechanisms and trial vector generation strategies, collaborate with each other efficiently to search for globally optimal solutions.

Comprehensive experimental results indicate that the proposed PPSDE provides state-of-the-art performance on the 28 CSOPs of the CEC2018 competition on constrained real-parameter optimization.
2 Related Work
In this section, related work on adaptive DE algorithms is introduced. The proposed PPSDE employs an adaptive DE algorithm inspired by several adaptive DE variants, including CoDE 5688232, C2oDE 8315135, LSHADE44+IDE 7969472, UDE 7969446, IUDE Anupam2018 and AGAPPS inbook. Each DE algorithm is described as follows:

CoDE — CoDE 5688232 randomly combines three trial vector generation strategies and three control parameter settings to generate trial vectors. In CoDE 5688232, the strategy pool consists of the DE/rand/1/bin, DE/rand/2/bin, and DE/current-to-rand/1 trial vector generation strategies. The parameter pool consists of three control parameter settings: [$F = 1.0$, $CR = 0.1$], [$F = 1.0$, $CR = 0.9$] and [$F = 0.8$, $CR = 0.2$]. When generating offspring, three trial vectors are created by using the three trial vector generation strategies with control parameter settings randomly selected from the parameter pool. Then, the best trial vector is selected to update its parent. The experimental results demonstrated that the overall performance of CoDE is better than four other state-of-the-art DE variants, i.e., JADE 5208221, jDE 4016057, SaDE 4632146, and EPSDE MALLIPEDDI20111679, and three non-DE variants, i.e., CLPSO 1637688, CMA-ES CMAES2001, and GL25 GARCIAMARTINEZ20081088, on the 25 global numerical optimization problems used in the CEC2005 special session on real-parameter optimization.

C2oDE — C2oDE 8315135 is an extension of CoDE 5688232 for solving CSOPs. It also adopts three different trial vector generation strategies, including DE/current-to-rand/1, DE/current-to-best/1, and modified DE/rand-to-best/1, to balance the diversity and convergence of a working population. In terms of constraint handling, a new comparison rule that combines the feasibility rule DEB2000311 with the $\varepsilon$ constrained method takahama2005constrained is proposed. When generating offspring, three trial vectors are created by using the three trial vector generation strategies. Then, the best trial vector is selected by using the feasibility rule, and the selected trial vector is used to update its parent by using the $\varepsilon$ constrained method. Moreover, a restart scheme is proposed to help the population jump out of a local optimum in the infeasible region for some extremely complex CSOPs.

LSHADE44 — LSHADE44 7969504 is an enhanced version of LSHADE 6900380, the first-ranked algorithm at the CEC2014 competition on real-parameter single objective optimization. In LSHADE44, four different trial vector generation strategies, including DE/current-to-pbest/1/bin, DE/current-to-pbest/1/exp, DE/randrl/1/bin, and DE/randrl/1/exp, are adopted to generate an offspring. The newly generated offspring updates its parent by using the feasibility rule DEB2000311 to deal with constraints.

LSHADE44+IDE — The search process of LSHADE44+IDE 7969472 is divided into two stages. In the first stage, the search for feasible individuals is carried out by minimizing the mean constraint violation. When an a priori given number of feasible individuals is found or the predefined portion of function evaluations (FES) is consumed, the search process is switched to the second stage, in which the function value is minimized until the stopping criterion is met. If a sufficient number of feasible individuals is found in the first stage, the feasible solutions are adopted as the initial population for the second search stage; otherwise, the individuals with the smallest mean constraint violation are used as the initial population for the second stage. An adaptive version of DE named LSHADE44 7744403, with population size reduction over four DE strategies, is used in the first search stage. The adaptive DE variant with the individual-dependent technique 6913512 is employed in the second search stage.

UDE — UDE 7969446 is inspired by some popular DE variants, including CoDE 5688232, JADE 5208221, SaDE 4632146, and the ranking-based mutation operator 6423878. It ranked second in the CEC2017 competition on constrained real-parameter optimization. In UDE, three trial vector generation strategies and two types of control parameter settings are combined. More specifically, UDE divides the working population into two subpopulations. In the top subpopulation, UDE uses all three trial vector generation strategies on each target vector, just as in CoDE 5688232. In the bottom subpopulation, strategy adaptation is applied to select a trial vector generation strategy to generate an offspring. In the strategy adaptation, the three trial vector generation strategies are periodically self-adapted by learning from their experience in generating promising solutions in the top subpopulation. In addition, a DE mutation strategy based on a local search operation is adopted in UDE. A static penalty method is used to deal with constraints in UDE.

IUDE — IUDE Anupam2018 is an improved version of UDE 7969446. It ranked first in the CEC2018 competition on constrained real-parameter optimization. In IUDE, the constraint-handling method is a combination of the $\varepsilon$ constraint-handling technique takahama2005constrained and the superiority of feasible solutions method DEB2000311, while in UDE only the static penalty method is adopted to deal with constraints. Furthermore, IUDE employs the parameter adaptation technique of LSHADE44 7969504 to generate offspring, while UDE utilizes a control parameter pool.

AGAPPS — AGAPPS inbook adopts an adaptive method to select recombination operators, including differential evolution (DE) operators and polynomial operators. Moreover, a push and pull search (PPS) method is employed to deal with constraints. The PPS has two search stages: the push stage and the pull stage. In the push stage, a CSOP is optimized without considering constraints. In the pull stage, the CSOP is optimized with an improved epsilon constraint-handling method. The experimental results show that AGAPPS is significantly better than three other DE variants (LSHADE44+IDE, LSHADE44 and UDE) on the CEC2017 competition on constrained real-parameter optimization, which manifests that AGAPPS is a quite competitive algorithm for solving CSOPs.
3 Proposed Method
In this section, the proposed PPSDE algorithm is presented. PPSDE is a significantly enhanced version of AGAPPS inbook; its primary feature lies in strengthening both the DE algorithm and the constraint-handling method. PPSDE is inspired by the following state-of-the-art DE variants: CoDE 5688232, C2oDE 8315135, LSHADE44+IDE 7969472, UDE 7969446, IUDE Anupam2018 and AGAPPS inbook. PPSDE uses three different trial vector generation strategies, including modified DE/rand/1/bin, DE/current-to-pbest/1, and DE/current-to-rand/1, to generate three trial vectors. In PPSDE, the working population is divided into two subpopulations, the top and the bottom subpopulations. In the top subpopulation, PPSDE employs all three trial vector generation strategies on each target vector, just as in CoDE 5688232 and C2oDE 8315135. In the bottom subpopulation, a strategy adaptation, in which the trial vector generation strategies are periodically adapted by learning from their experience in generating successful solutions in the top subpopulation, is employed to select one trial vector generation strategy to generate one trial vector. The constraint handling in the top subpopulation is based on the PPS, and the bottom subpopulation adopts the feasibility rule DEB2000311 to deal with constraints. Furthermore, the control parameter settings adaptation strategy proposed in LSHADE44 7969504 is also used in the PPSDE algorithm. In the replacement process, the PPS is used to select individuals into the next generation.
3.1 Push and Pull Search
Push and pull search (PPS) is a general framework which aims to solve constrained optimization problems FAN2018. It was first proposed for constrained multiobjective optimization problems (CMOPs), and is able to balance objective minimization and constraint satisfaction. The PPS divides the search process into two different stages. In the first stage, only the objectives are optimized, which means the working population is pushed toward the unconstrained global optimum without considering any constraints. In the pull stage, an improved epsilon constraint-handling approach is adopted to pull the working population to the constrained global optimum. In CMOPs, the influence of infeasible regions on Pareto fronts (PFs) can be classified into three different situations FAN2018. In the first situation, infeasible regions block the way towards the PF. In the second situation, the unconstrained PF is covered by infeasible regions and is entirely infeasible. In the last situation, infeasible regions make the original unconstrained PF partially feasible.
In CSOPs, the influence of infeasible regions on the global optimum can be categorized into two different situations. In the first situation, infeasible regions block the way towards the global optimum, but the constrained global optimum is the same as the unconstrained global optimum. In the second situation, the unconstrained global optimum is covered by infeasible regions, so the unconstrained global optimum is different from the constrained global optimum. For CSOPs, we also need to trade off objective minimization and constraint satisfaction. Therefore, it is quite natural to use PPS to solve CSOPs.
In the push search stage, a newly generated solution is retained in the next generation based on the objective value, as described in Algorithm 1.
In the pull stage, infeasible solutions are pulled to the feasible regions by using the improved epsilon constraint-handling method; the details can be found in Ref. FAN2018. A newly generated solution is selected for survival into the next generation based on the objective value, the overall constraint violation and the value of $\varepsilon$, as illustrated by Algorithm 2.
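A minimal sketch of an epsilon-level comparison of this kind. The improved variant of Ref. FAN2018 additionally adapts the epsilon level over time; this shows only the basic comparison rule:

```python
def eps_better(fa, va, fb, vb, eps):
    """Epsilon-level comparison: solutions whose violation is within eps are
    treated as feasible and compared by objective; otherwise by violation."""
    if (va <= eps and vb <= eps) or va == vb:
        return fa < fb
    return va < vb
```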
When solving CSOPs, the decision as to when to switch from the push search to the pull search is also very critical in the PPS. The following strategy for switching the search behavior is suggested:
$$
r_k = \frac{\left| f(\mathbf{x}_{i_k}) - f(\mathbf{x}_{i_{k-l}}) \right|}{\max\left( \left| f(\mathbf{x}_{i_{k-l}}) \right|,\, \Delta \right)} \tag{3}
$$
where $r_k$ represents the change rate of the minimal objective value during the last $l$ generations. $i_k$ and $i_{k-l}$ are the indexes of the solutions with the minimum objective values in generations $k$ and $k-l$, respectively. $l$ is a user-defined parameter. $\Delta$ is a very small positive number, which is used to make sure that the denominator in Eq. (3) is not equal to zero. At the beginning of the search, $r_k$ is initialized to 1. At each generation, $r_k$ is updated according to Eq. (3). If $r_k$ is less than or equal to the predefined threshold, the search behavior is switched to the pull search.
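The switch condition of Eq. (3) can be sketched as below; the guard value `delta=1e-6` and the window `l` passed by the caller are illustrative assumptions only, not the paper's settings:

```python
def change_rate(f_best, k, l, delta=1e-6):
    """Change rate r_k of Eq. (3): relative improvement of the minimal
    objective value over the last l generations. f_best[g] stores the best
    objective value found at generation g; delta guards the denominator."""
    return abs(f_best[k] - f_best[k - l]) / max(abs(f_best[k - l]), delta)

def should_switch(f_best, k, l, threshold, delta=1e-6):
    """Switch from push to pull search once the rate drops to the threshold."""
    return change_rate(f_best, k, l, delta) <= threshold
```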
3.2 Trial Vector Generation Strategy Adaptation
As discussed above, the working population of PPSDE is divided into two subpopulations: the top subpopulation and the bottom subpopulation. The top subpopulation adopts three different trial vector generation strategies, including modified DE/rand/1/bin, DE/current-to-pbest/1, and DE/current-to-rand/1, to generate three trial vectors. The trial vector generation strategy that produces the best trial vector scores a win according to the PPS method. At each generation, the success rate of each trial vector generation strategy is calculated over the previous generations. For example, let $w_1$, $w_2$ and $w_3$ be the numbers of wins of trial vector generation strategies 1, 2, and 3 over the previous generations. The success rate of trial vector generation strategy $i$ is then defined as $sr_i = w_i / (w_1 + w_2 + w_3)$.
In the bottom subpopulation, a trial vector generation strategy is selected according to its success rate, which is calculated in the top subpopulation. Then, the selected trial vector generation strategy is employed to generate a trial vector. It is worth noting that each trial vector generation strategy has the same probability of being selected while the generation counter is less than the learning period.
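A sketch of this success-rate-based selection as a roulette-wheel choice over the win counts, with a uniform fallback before the learning period has elapsed:

```python
import random

def select_strategy(wins, generation, learning_period):
    """Select a strategy index with probability proportional to its success
    rate w_i / sum(w); fall back to a uniform choice early in the run."""
    if generation < learning_period or sum(wins) == 0:
        return random.randrange(len(wins))
    r = random.random() * sum(wins)
    acc = 0.0
    for i, w in enumerate(wins):
        acc += w
        if r < acc:
            return i
    return len(wins) - 1
```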
3.3 Control Parameter Settings Adaptation
In PPSDE, the parameter adaptation principle of LSHADE44 7969504 is used in both subpopulations. In PPSDE, three trial vector generation strategies are employed, and each needs two parameters, the scale factor $F$ and the crossover rate $CR$. Three pairs of memories $M_F$ and $M_{CR}$ for the adaptation of $F$ and $CR$ are employed in PPSDE. For each strategy, PPSDE stores the successful values of the parameters $F$ and $CR$ in separate sets $S_F$ and $S_{CR}$ during a generation. Then, all three pairs of memories $M_F$ and $M_{CR}$ are adapted according to the values in the sets $S_F$ and $S_{CR}$. At the beginning of each generation, all sets $S_F$ and $S_{CR}$ are reset to empty sets. A pointer $k$ (there are three pointers, one per strategy) indicates the memory position to update and is set to 1 at the beginning of the search. Whenever a pair of memories is changed, $k$ is increased by 1; if $k > H$, where $H$ is the size of each historical memory, $k$ is reset to 1. At the beginning of the search, all entries of $M_F$ and $M_{CR}$ are initialized to 0.5. After each generation, $M_F$ and $M_{CR}$ are updated as follows.
$$
M_{F,k} = \mathrm{mean}_{WL}(S_F) \tag{4}
$$
$$
M_{CR,k} = \mathrm{mean}_{WL}(S_{CR}) \tag{5}
$$
$$
\mathrm{mean}_{WL}(S_F) = \frac{\sum_{i=1}^{|S_F|} w_i F_i^2}{\sum_{i=1}^{|S_F|} w_i F_i} \tag{6}
$$
$$
\mathrm{mean}_{WL}(S_{CR}) = \frac{\sum_{i=1}^{|S_{CR}|} w_i CR_i^2}{\sum_{i=1}^{|S_{CR}|} w_i CR_i} \tag{7}
$$
$$
w_i = \frac{\Delta h_i}{\sum_{j=1}^{|S_F|} \Delta h_j} \tag{8}
$$
$$
\Delta h_i = \left| h(\mathbf{u}_i) - h(\mathbf{x}_i) \right| \tag{9}
$$
where $h$ is the objective function $f$ in the case where the old point $\mathbf{x}_i$ was replaced by $\mathbf{u}_i$ because $f(\mathbf{u}_i)$ was less than or equal to $f(\mathbf{x}_i)$. In the case where the old point $\mathbf{x}_i$ was replaced by $\mathbf{u}_i$ because $\phi(\mathbf{u}_i)$ was less than or equal to $\phi(\mathbf{x}_i)$, $h$ is the overall constraint violation function $\phi$.
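A sketch of this SHADE-style adaptation of Eqs. (4)-(9): the weighted Lehmer mean updates one memory slot from the successful values, and new $F$/$CR$ values are sampled around a randomly chosen slot. The 0.1 scale parameters of the Cauchy and Gaussian draws follow the usual SHADE convention and are assumptions here:

```python
import math
import random

def lehmer_mean(s_vals, deltas):
    """Weighted Lehmer mean of Eqs. (6)-(9): successful parameter values are
    weighted by the improvement delta_i = |h(u_i) - h(x_i)| they produced."""
    total = sum(deltas)
    w = [d / total for d in deltas]
    num = sum(wi * s * s for wi, s in zip(w, s_vals))
    den = sum(wi * s for wi, s in zip(w, s_vals))
    return num / den

def sample_f_cr(mem_f, mem_cr):
    """Draw F from Cauchy(M_F[r], 0.1), regenerating until F > 0 and capping
    at 1; draw CR from Gauss(M_CR[r], 0.1), truncated into [0, 1]."""
    r = random.randrange(len(mem_f))
    f = 0.0
    while f <= 0.0:
        f = mem_f[r] + 0.1 * math.tan(math.pi * (random.random() - 0.5))
    f = min(f, 1.0)
    cr = min(max(random.gauss(mem_cr[r], 0.1), 0.0), 1.0)
    return f, cr
```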
The scale factor $F$ and the crossover rate $CR$ are generated as follows. First, a random index $r$ into the historical memory is selected. Then, $F$ is a random number drawn from the Cauchy distribution with parameters $(M_{F,r}, 0.1)$; $F$ is regenerated until it is bigger than 0, and if $F > 1$, $F$ is set to one. $CR$ is a random number drawn from the Gaussian distribution with parameters $(M_{CR,r}, 0.1)$, truncated into the interval [0, 1].

3.4 Constraint Handling
In PPSDE, two different kinds of constraint-handling methods are employed: the PPS technique FAN2018 and the superiority of feasible (SF) solutions method DEB2000311. More specifically, in the top subpopulation, the PPS technique is adopted as the constraint-handling method to select the best trial vectors. The SF constraint-handling method is used to sort the solutions of the population in increasing order; the sorted population is then divided into the top and the bottom subpopulations. In the replacement process, the PPS technique is also used to select solutions into the next generation.
3.5 The Framework of the Proposed Method
Algorithm 3 outlines the pseudocode of the proposed PPSDE algorithm. The generation counter and the population are initialized at line 1. At line 2, the initialized population is evaluated, and the number of consumed function evaluations is recorded. The number of wins of each trial vector generation strategy is initialized at line 3. The memories $M_F$ and $M_{CR}$, which are used to set $F$ and $CR$, are also initialized at line 3.
The algorithm repeats lines 4-24 until the number of consumed function evaluations is greater than the maximum allowed. At line 5, $\varepsilon$ is calculated according to the PPS method. The working population is divided into the top and bottom subpopulations at line 6. Lines 7-11 show the process of generating offspring in the top subpopulation. At line 12, the best offspring are selected from the newly generated solutions according to the PPS method. The number of wins of the winning trial vector generation strategy is updated at line 13. The success rate of each trial vector generation strategy is calculated at line 14. Lines 15-19 show the process of generating offspring in the bottom subpopulation; it is worth noting that only one trial vector is generated at each iteration. At line 20, a one-to-one comparison is employed to select solutions for the next generation according to the PPS method. The three pairs of memories $M_F$ and $M_{CR}$ for the adaptation of $F$ and $CR$ are updated at line 21. The best solution in the current generation is selected at line 22. Finally, the generation counter is updated at line 23.
4 Experimental Study
4.1 Experimental settings
Seven state-of-the-art constrained EAs, including AGAPPS inbook, LSHADE44 7969504, LSHADE44+IDE 7969472, UDE 7969446, IUDE Anupam2018, MAgES 8477950 and C2oDE 8315135, are compared with the proposed PPSDE on the 28 benchmark problems with 10-, 30- and 50-dimensional decision variables provided in the CEC2018 competition on constrained real-parameter optimization. Each algorithm is run 25 times independently on the 28 test instances. The parameter settings of each algorithm are listed as follows:

Population size: $N$, where $D$ is the dimension of the problem.

Size of the top subpopulation: $N_{top}$.

Learning period: $LP$ generations.

DE/current-to-pbest/1 parameter: $p$.

The maximum number of function evaluations: $FES_{max}$.
The Friedman aligned test is used to check whether the differences between the proposed PPSDE and the compared algorithms are statistically significant. The Friedman aligned test is carried out at a 0.05 significance level.
4.2 Discussion of Experiments
The mean values and the standard deviations of the objectives on the test instances C01-C28 with $D = 10$ achieved by the eight algorithms in 25 independent runs are listed in Table 1. The Friedman aligned test indicates that PPSDE ranks highest among the eight algorithms, as shown in the last row of Table 1. The p-value computed through the statistics of the Friedman aligned test is 0, which strongly suggests the existence of significant differences among the eight tested algorithms. For C01-C06, C13, C16, C19, C25 and C28 with 10-dimensional decision vectors, PPSDE achieves the global optimal solutions steadily. Among the 28 test instances, PPSDE has the best performance on 16 test problems, which indicates the superiority of PPSDE.

The statistical results of the objectives on the test instances C01-C28 with $D = 30$ achieved by the eight algorithms in 25 independent runs are listed in Table 2. On the 28 test instances, PPSDE has the best performance on 10 test problems among the eight tested algorithms. The Friedman aligned test also indicates that PPSDE ranks first among the eight algorithms, as shown in the last row of Table 2. The p-value computed through the statistics of the Friedman aligned test is 0, which strongly suggests the existence of significant differences among the eight tested algorithms.
Table 3 shows the mean values and the standard deviations of the objectives on the test instances C01-C28 with $D = 50$ achieved by the eight algorithms in 25 independent runs. Among the 28 test instances, PPSDE has the best performance on 10 test problems among the eight tested algorithms. The Friedman aligned test also indicates that PPSDE ranks highest among the eight algorithms, as shown in the last row of Table 3. The p-value computed through the statistics of the Friedman aligned test is 0, which strongly suggests the existence of significant differences among the eight tested algorithms.
From the above observation, it is clear that PPSDE is significantly better than the other seven algorithms on most of the 28 test instances. One possible reason is that, among the 28 test problems, there are many instances whose global optimal solutions are the same as those of their unconstrained counterparts. In PPSDE, the global optimal solutions can be achieved in the push stage without dealing with any constraints.
Test Instances  AGAPPS  LSHADE44+IDE  LSHADE44  UDE  IUDE  MAgES  CoDE  PPSDE  

C01  mean  0.00E+00  0.00E+00  0.00E+00  5.03E15  0.00E+00  1.65E30  0.00E+00  0.00E+00 
std  0.00E+00  0.00E+00  0.00E+00  5.28E15  0.00E+00  7.58E30  0.00E+00  0.00E+00  
C02  mean  1.01E30  0.00E+00  0.00E+00  6.44E15  0.00E+00  0.00E+00  0.00E+00  0.00E+00 
std  3.50E30  0.00E+00  0.00E+00  8.16E15  0.00E+00  0.00E+00  0.00E+00  0.00E+00  
C03  mean  7.58E+00  3.26E+05  3.15E+04  7.74E+01  3.54E+01  4.73E31  0.00E+00  0.00E+00 
std  2.63E+01  2.58E+05  3.70E+04  8.30E+00  3.77E+01  1.73E30  0.00E+00  0.00E+00  
C04  mean  1.63E+00  1.44E+01  1.36E+01  2.51E+01  2.90E+00  2.98E+01  1.36E+01  0.00E+00 
std  4.50E+00  1.15E+00  6.15E02  9.10E+00  5.95E+00  1.76E+01  2.74E07  0.00E+00  
C05  mean  0.00E+00  0.00E+00  0.00E+00  1.68E+00  1.74E30  0.00E+00  0.00E+00  0.00E+00 
std  0.00E+00  0.00E+00  0.00E+00  9.72E01  6.02E30  0.00E+00  0.00E+00  0.00E+00  
C06  mean  0.00E+00  8.08E+02  6.49E+02  8.71E+01  0.00E+00  3.58E+01  0.00E+00  0.00E+00 
std  0.00E+00  5.45E+02  2.84E+02  3.18E+01  0.00E+00  3.82E+01  0.00E+00  0.00E+00  
C07  mean  1.36E+02  3.40E+01  3.74E+00  6.46E+00  2.77E+02  3.17E+02  2.88E+02  3.57E+02 
std  6.86E+01  5.70E+01  6.96E+01  9.55E+01  1.10E+02  8.32E+01  9.25E+01  1.45E+02  
C08  mean  1.35E03  0.00E+00  1.35E03  1.34E03  1.35E03  1.35E03  1.35E03  1.35E03 
std  4.43E19  0.00E+00  2.21E19  8.33E06  0.00E+00  0.00E+00  4.05E13  0.00E+00  
C09  mean  4.98E03  0.00E+00  4.97E03  4.98E03  4.98E03  4.98E03  4.98E03  4.98E03 
std  2.66E18  0.00E+00  2.44E05  1.08E10  0.00E+00  0.00E+00  0.00E+00  0.00E+00  
C10  mean  5.10E04  0.00E+00  5.10E04  5.08E04  5.10E04  5.10E04  5.10E04  5.10E04 
std  2.21E19  0.00E+00  1.11E19  2.02E06  7.11E16  0.00E+00  3.50E13  1.78E15  
C11  mean  1.69E01  0.00E+00  1.69E01  6.00E+00  8.01E01  1.68E01  1.69E01  4.60E+01 
std  1.05E03  0.00E+00  2.83E17  1.00E04  1.76E06  5.13E03  1.40E09  1.59E+02  
C12  mean  3.99E+00  3.99E+00  3.99E+00  3.99E+00  3.99E+00  7.00E+00  3.99E+00  4.01E+00 
std  3.07E05  0.00E+00  2.32E03  6.00E06  1.23E03  7.03E+00  5.79E04  1.04E01  
C13  mean  1.60E01  0.00E+00  0.00E+00  1.11E+01  0.00E+00  1.59E01  0.00E+00  0.00E+00 
std  7.97E01  0.00E+00  0.00E+00  2.26E+01  0.00E+00  7.97E01  0.00E+00  0.00E+00  
C14  mean  2.38E+00  3.00E+00  2.88E+00  2.74E+00  2.38E+00  2.87E+00  2.38E+00  2.38E+00 
std  9.07E16  0.00E+00  2.04E01  3.22E01  1.36E15  7.63E01  1.36E15  1.36E15  
C15  mean  4.62E+00  1.13E+01  1.45E+01  6.75E+00  6.38E+00  7.61E+00  6.63E+00  6.13E+00 
std  1.93E+00  2.34E+00  3.77E+00  2.03E+00  4.11E+00  6.47E+00  3.83E+00  4.53E+00  
C16  mean  0.00E+00  4.04E+01  4.07E+01  6.28E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00 
std  0.00E+00  6.03E+00  6.72E+00  1.29E04  0.00E+00  0.00E+00  0.00E+00  0.00E+00  
C17  mean  6.42E01  1.00E+00  9.11E01  1.05E+00  1.99E02  7.35E01  NaN  2.74E01 
std  5.83E01  0.00E+00  1.77E01  1.34E01  4.48E02  3.22E01  NaN  4.45E01  
C18  mean  3.66E+01  3.17E+03  2.11E+03  2.36E+03  1.17E01  3.66E+01  NaN  3.72E+00 
std  1.26E05  2.41E+03  2.11E+03  1.76E+03  1.31E+01  1.65E05  NaN  8.37E+00  
C19  mean  0.00E+00  0.00E+00  1.45E06  2.72E03  0.00E+00  1.12E+00  NaN  0.00E+00 
std  0.00E+00  0.00E+00  3.12E07  6.66E03  0.00E+00  2.34E+00  NaN  0.00E+00  
C20  mean  5.76E01  4.16E01  1.93E01  1.65E+00  6.99E01  1.17E+00  4.74E01  5.83E01 
std  1.63E01  1.24E01  5.79E02  3.93E01  1.16E01  3.93E01  1.34E01  1.10E01  
C21  mean  4.92E+00  3.99E+00  3.99E+00  6.24E+00  3.99E+00  4.41E+00  4.41E+00  3.99E+00 
std  3.22E+00  0.00E+00  5.20E04  6.23E+00  4.12E05  2.12E+00  2.12E+00  4.99E03  
C22  mean  4.78E01  1.60E01  6.38E01  1.25E+01  3.14E+00  6.38E01  3.41E27  3.66E27 
std  1.32E+00  7.97E01  1.49E+00  2.47E+01  1.24E+01  1.49E+00  7.17E29  9.99E28  
C23  mean  2.42E+00  3.02E+00  2.54E+00  2.75E+00  2.38E+00  2.50E+00  2.38E+00  2.38E+00 
std  9.45E02  2.09E01  2.49E01  2.95E01  6.65E15  3.27E01  0.00E+00  8.84E16  
C24  mean  2.73E+00  8.77E+00  8.64E+00  6.00E+00  5.50E+00  6.13E+00  4.24E+00  4.99E+00 
std  1.04E+00  1.10E+00  9.07E01  1.18E+00  3.01E+00  2.22E+00  2.03E+00  4.12E+00  
C25  mean  0.00E+00  3.77E+01  3.87E+01  6.35E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00 
std  0.00E+00  7.31E+00  5.19E+00  3.14E01  0.00E+00  0.00E+00  0.00E+00  0.00E+00  
C26  mean  7.01E01  9.37E01  1.06E+00  1.02E+00  8.65E02  7.54E01  NaN  3.02E01 
std  4.71E01  3.72E01  3.23E01  5.17E02  1.84E01  3.25E01  NaN  4.41E01  
C27  mean  3.66E+01  7.94E+03  3.55E+03  6.72E+03  7.67E+01  7.59E+01  NaN  6.40E+01 
std  2.00E05  9.15E+03  4.81E+03  6.80E+03  4.75E+01  1.37E+02  NaN  6.03E+01  
C28  mean  6.30E+00  1.08E+01  1.97E+01  9.76E+00  4.57E+00  8.68E+00  NaN  0.00E+00 
std  6.55E+00  1.63E+01  1.15E+01  8.45E+00  6.63E+00  8.52E+00  NaN  0.00E+00  
Friedman Aligned Test  4.1964  5.8036  5.3750  5.9464  3.2321  4.6964  4.0536  2.6964 
Test Instances  AGAPPS  LSHADE44+IDE  LSHADE44  UDE  IUDE  MAgES  CoDE  PPSDE  

C01  mean  7.10E29  3.37E11  1.02E21  2.21E15  4.13E29  3.75E28  6.34E17  3.98E29 
std  4.23E29  4.11E11  4.87E21  7.08E15  2.26E29  7.20E29  5.25E17  2.42E29  
C02  mean  6.27E29  1.77E11  2.86E21  1.17E14  4.42E29  3.76E28  6.88E17  3.79E29 
std  4.24E29  2.52E11  9.27E21  3.65E14  2.58E29  7.26E29  5.96E17  2.32E29  
C03  mean  1.08E+03  1.13E+07  1.12E+06  8.59E+01  1.29E+02  6.73E28  NaN  7.98E+01 
std  4.16E+02  4.60E+06  1.95E+06  2.29E+01  2.95E+01  1.07E28  NaN  1.73E+01  
C04  mean  2.19E+01  1.39E+01  1.97E+01  8.45E+01  1.36E+01  7.03E+01  1.10E+02  1.09E+00 
std  3.87E+00  7.78E01  5.40E01  2.36E+01  1.45E06  3.11E+01  8.21E+00  3.76E+00  
C05  mean  6.47E28  1.30E16  4.25E03  7.22E+00  5.71E29  0.00E+00  4.27E07  5.01E29 
std  9.26E28  7.82E17  4.40E03  1.07E+00  9.91E29  0.00E+00  4.51E07  5.88E29  
C06  mean  4.09E+02  5.67E+03  3.96E+03  3.28E+02  4.29E+02  1.80E+02  NaN  0.00E+00 
std  5.59E+01  1.03E+03  7.22E+02  1.05E+02  9.01E+01  9.96E+01  NaN  0.00E+00  
C07  mean  2.21E+02  1.02E+01  5.55E+01  4.11E+02  3.27E+02  7.01E+02  NaN  2.45E+02 
std  6.65E+01  9.68E+01  1.08E+02  2.26E+02  1.14E+02  2.32E+02  NaN  1.43E+02  
C08  mean  2.84E04  2.40E04  2.80E04  2.40E04  2.80E04  2.84E04  2.06E04  2.51E02 
std  3.56E09  4.05E05  5.77E10  4.94E05  1.25E12  3.94E16  1.54E05  9.57E02  
C09  mean  2.67E03  2.67E03  2.67E03  2.67E03  2.67E03  2.67E03  2.66E03  2.67E03 
std  8.85E19  5.44E09  1.33E18  3.32E16  0.00E+00  0.00E+00  7.83E07  0.00E+00  
C10  mean  1.03E04  9.00E05  1.00E04  9.12E05  1.00E04  1.03E04  7.18E05  3.79E02 
std  4.25E09  8.64E06  4.76E10  1.79E05  5.06E15  0.00E+00  5.98E06  1.87E01  
C11  mean  3.04E+02  8.55E01  8.75E01  2.70E+01  7.75E+00  9.25E01  NaN  6.43E02 
std  3.06E+02  9.70E02  1.10E01  4.76E+00  7.36E+00  7.03E15  NaN  4.67E+00  
C12  mean  3.98E+00  6.07E+00  4.00E+00  1.57E+01  3.98E+00  4.61E+01  4.67E+00  4.98E+00 
std  4.26E04  2.84E+00  1.35E02  8.83E+00  1.07E04  2.97E+01  2.06E01  1.53E+00  
C13  mean  1.29E+01  3.27E+01  5.03E+01  9.64E+01  3.54E+00  2.89E27  1.59E+01  5.43E27 
std  3.02E+01  3.92E+01  1.36E+01  1.29E+02  1.61E+01  1.89E27  3.28E+01  6.01E27  
C14  mean  1.45E+00  1.93E+00  1.86E+00  1.59E+00  1.41E+00  1.63E+00  1.51E+00  1.41E+00 
std  6.09E02  4.66E02  4.47E02  1.93E01  1.05E15  9.17E02  3.93E02  9.06E16  
C15  mean  2.73E+00  1.29E+01  1.92E+01  9.27E+00  5.87E+00  6.75E+00  1.58E+01  6.38E+00 
std  1.38E+00  1.54E+00  3.61E+00  2.22E+00  3.55E+00  5.94E+00  3.57E+00  4.40E+00  
C16  mean  0.00E+00  1.56E+02  1.54E+02  8.92E+00  1.57E+00  0.00E+00  1.06E+01  0.00E+00 
std  0.00E+00  1.36E+01  1.53E+01  3.07E+00  5.12E07  0.00E+00  3.25E+00  0.00E+00  
C17  mean  1.21E+00  1.03E+00  1.00E+00  1.03E+00  1.83E01  9.72E01  NaN  4.57E01 
std  3.17E01  5.84E03  1.82E02  2.78E03  2.78E01  1.75E02  NaN  4.23E01  
C18  mean  3.66E+01  7.54E+03  9.13E+03  9.84E+03  1.81E+02  3.65E+01  NaN  7.07E+01 
std  1.39E01  5.26E+03  6.63E+03  3.78E+03  4.83E+01  1.39E01  NaN  5.73E+01  
C19  mean  0.00E+00  1.28E03  1.08E03  1.97E+00  0.00E+00  7.60E+00  NaN  0.00E+00 
std  0.00E+00  3.99E04  9.40E04  3.52E+00  0.00E+00  9.08E+00  NaN  0.00E+00  
C20  mean  4.38E+00  2.92E+00  3.55E+00  4.00E+00  3.89E+00  7.66E+00  2.98E+00  3.65E+00 
std  6.63E01  3.15E01  2.21E01  1.06E+00  2.83E01  1.24E+00  5.51E01  2.82E01  
C21  mean  9.37E+00  2.77E+01  2.28E+01  1.25E+01  1.56E+01  4.84E+01  1.17E+01  1.99E+01 
std  6.49E+00  9.19E+00  8.99E+00  8.47E+00  1.09E+01  1.58E+01  4.20E+00  9.73E+00  
C22  mean  1.84E+02  1.18E+03  3.24E+03  2.21E+02  1.96E+01  2.47E25  NaN  1.59E01 
std  2.09E+02  2.02E+03  3.17E+03  1.82E+02  3.50E+01  2.97E26  NaN  7.97E01  
C23  mean  1.43E+00  1.91E+00  1.86E+00  1.50E+00  1.43E+00  1.65E+00  1.78E+00  1.42E+00 
std  4.48E02  5.50E02  6.09E02  1.17E01  3.79E02  8.73E02  2.20E01  3.25E02  
C24  mean  3.36E+00  1.42E+01  1.22E+01  9.27E+00  2.48E+00  9.14E+00  1.27E+01  3.24E+00 
std  1.50E+00  1.37E+00  1.04E+00  1.28E+00  6.28E01  3.92E+00  3.33E+00  2.65E+00  
C25  mean  1.83E+01  1.48E+02  1.47E+02  1.59E+01  8.73E+00  0.00E+00  3.04E+01  4.40E+00 
std  7.25E+00  1.39E+01  1.27E+01  3.64E+00  4.95E+00  0.00E+00  1.09E+01  3.42E+00  
C26  mean  9.05E01  1.03E+00  1.00E+00  1.03E+00  6.83E01  9.78E01  NaN  7.94E01 
std  1.92E01  1.80E03  2.11E02  5.13E03  2.45E01  1.77E02  NaN  2.84E01  
C27  mean  3.71E+01  4.16E+04  3.19E+04  3.07E+04  2.79E+02  3.66E+01  NaN  1.90E+02 
std  1.83E+00  2.00E+04  1.13E+04  1.34E+04  5.92E+01  1.93E01  NaN  6.29E+01  
C28  mean  4.94E+01  1.55E+02  1.51E+02  6.50E+01  7.62E+01  5.84E+01  NaN  6.34E+00 
std  2.17E+01  1.91E+01  2.04E+01  1.93E+01  2.95E+01  2.33E+01  NaN  5.66E+00  
Friedman Aligned Test  3.3036  6.1071  5.6607  5.0714  3.1429  3.6071  6.0357  3.0714 
Test Instances  AGAPPS  LSHADE44+IDE  LSHADE44  UDE  IUDE  MAgES  CoDE  PPSDE  

C01  mean  6.76E25  1.21E03  9.80E19  6.77E04  7.68E28  2.87E27  1.79E05  8.73E28 
std  8.43E25  7.58E04  1.88E18  9.77E04  6.72E28  3.96E28  1.56E05  8.41E28  
C02  mean  1.01E24  8.25E04  2.70E17  2.89E04  7.92E28  2.89E27  2.16E05  8.79E28 
std  3.71E24  7.00E04  7.75E17  3.30E04  6.41E28  3.63E28  1.74E05  6.76E28  
C03  mean  5.44E+03  4.14E+07  3.54E+06  3.41E+02  5.65E+02  4.06E27  NaN  1.04E+02 
std  1.40E+03  1.36E+07  5.08E+06  1.15E+02  1.93E+02  3.58E28  NaN  4.20E+01  
C04  mean  1.40E+02  1.40E+01  1.48E+02  1.61E+02  6.45E+01  1.19E+02  3.42E+02  3.00E+01 
std  2.67E+01  9.87E01  7.43E+00  2.80E+01  2.20E+01  2.80E+01  1.36E+01  3.47E+01  
C05  mean  1.29E19  4.31E09  2.11E+01  3.19E+01  2.85E28  0.00E+00  2.30E+01  3.96E28 
std  3.98E19  1.05E08  3.80E01  3.21E+00  1.77E28  0.00E+00  8.91E01  2.94E28  
C06  mean  8.16E+02  8.99E+03  7.41E+03  6.56E+02  8.59E+02  2.87E+02  NaN  1.59E+01 
std  8.44E+01  1.06E+03  1.20E+03  2.25E+02  1.25E+02  1.31E+02  NaN  7.00E+01  
C07  mean  1.89E+02  3.65E+01  3.94E+01  6.73E+02  1.96E+02  1.37E+03  6.30E+77  1.16E+02 
std  9.48E+01  1.21E+02  1.61E+02  2.44E+02  2.15E+02  3.40E+02  3.15E+78  1.98E+02  
C08  mean  1.17E04  2.96E04  1.30E04  1.62E03  1.30E04  1.35E04  NaN  1.32E04 
std  3.21E05  7.59E05  2.33E07  7.90E04  4.28E06  1.23E16  NaN  5.99E06  
C09  mean  2.04E03  1.56E03  2.04E03  2.04E03  2.04E03  6.66E01  1.45E03  1.34E02 
std  1.94E09  2.35E04  1.33E18  5.84E11  0.00E+00  1.87E+00  1.13E04  3.41E02  
C10  mean  4.75E05  9.36E05  4.82E05  6.06E05  4.83E05  4.83E05  6.31E04  4.83E05 
std  1.33E06  3.77E05  8.10E08  4.70E05  1.73E11  1.83E09  9.70E05  1.95E08  
C11  mean  2.59E+03  7.30E01  1.19E+00  9.48E+01  1.15E+03  3.70E+00  6.31E04  4.84E+02 
std  3.64E+02  3.30E+00  2.44E+00  4.66E+01  1.16E+03  8.29E+00  9.70E05  9.62E+02  
C12  mean  6.63E+00  7.36E+00  5.20E+01  1.25E+01  5.96E+00  5.06E+01  6.78E+00  5.44E+00 
std  4.07E+00  2.86E+00  2.09E+01  5.86E+00  1.51E+00  2.05E+01  5.25E01  1.98E+00  
C13  mean  6.34E+01  9.14E+01  6.50E+02  1.37E+03  1.98E+01  2.95E+02  NaN  3.46E26 
std  5.42E+01  2.49E+01  1.02E+02  4.17E+02  4.05E+01  4.44E+02  NaN  2.37E26  
C14  mean  1.17E+00  1.49E+00  1.41E+00  1.29E+00  1.10E+00  1.34E+00  1.46E+00  1.10E+00 
std  8.75E02  2.97E02  2.96E02  9.74E02  6.80E16  3.74E02  6.62E02  6.80E16  
C15  mean  5.25E+00  1.45E+01  1.78E+01  1.17E+01  6.00E+00  1.45E+01  NaN  6.63E+00 
std  1.26E+00  1.65E+00  3.00E+00  1.43E+00  5.10E+00  1.01E+01  NaN  4.96E+00  
C16  mean  6.28E02  2.72E+02  2.72E+02  1.26E+01  6.28E+00  0.00E+00  5.47E+01  0.00E+00 
std  3.14E01  1.77E+01  1.84E+01  7.25E15  2.70E05  0.00E+00  1.50E+01  0.00E+00  
C17  mean  1.01E+00  1.05E+00  1.04E+00  1.05E+00  6.09E01  1.03E+00  NaN  1.30E+00 
std  2.75E01  5.86E04  5.57E03  1.56E03  2.27E01  6.12E03  NaN  4.25E01  
C18  mean  3.66E+01  2.00E+04  2.05E+04  3.40E+04  3.73E+02  3.66E+01  NaN  2.27E+02 
std  3.74E01  6.83E+03  7.21E+03  9.62E+03  3.77E+01  5.93E01  NaN  9.00E+01  
C19  mean  0.00E+00  3.54E02  6.66E02  6.42E+00  7.06E01  1.25E+01  NaN  0.00E+00 
std  0.00E+00  1.93E02  3.89E02  7.26E+00  2.45E+00  9.73E+00  NaN  0.00E+00  
C20  mean  1.03E+01  5.63E+00  8.12E+00  7.85E+00  8.68E+00  1.52E+01  1.30E+01  8.47E+00 
std  6.01E01  2.93E01  2.99E01  1.64E+00  3.99E01  5.51E01  3.67E01  3.16E01  
C21  mean  6.62E+00  6.28E+01  6.53E+01  7.64E+00  8.73E+00  5.53E+01  4.45E+01  1.30E+01 
std  3.77E+00  1.43E+00  2.04E+00  4.22E+00  5.30E+00  1.68E+01  1.29E+01  7.58E+00  
C22  mean  4.12E+03  1.13E+04  1.45E+04  4.09E+03  5.39E+02  9.76E+02  NaN  1.89E+01 
std  6.42E+03  6.03E+03  7.73E+03  3.05E+03  5.01E+02  6.06E+02  NaN  2.73E+01  
C23  mean  1.15E+00  1.44E+00  1.42E+00  1.26E+00  1.11E+00  1.34E+00  1.57E+00  1.11E+00 
std  3.99E02  2.98E02  3.13E02  7.70E02  1.74E02  4.52E02  2.56E02  1.74E02  
C24  mean  5.50E+00  1.56E+01  1.43E+01  1.14E+01  4.24E+00  1.88E+00  1.82E+01  2.48E+00 
std  9.07E01  1.57E+00  1.28E+00  1.38E+00  3.01E+00  1.49E+01  1.92E+00  6.28E01  
C25  mean  5.30E+01  2.65E+02  2.53E+02  2.34E+01  6.85E+00  0.00E+00  1.17E+02  1.60E+01 
std  1.67E+01  2.00E+01  1.69E+01  7.59E+00  1.75E+00  0.00E+00  3.40E+01  5.89E+00  
C26  mean  9.91E01  1.05E+00  1.04E+00  1.05E+00  9.82E01  1.03E+00  NaN  9.98E01 
std  1.50E01  3.46E03  3.29E03  3.76E03  5.46E02  7.26E03  NaN  1.28E01  
C27  mean  4.07E+01  7.60E+04  8.40E+04  1.09E+05  4.95E+02  3.65E+01  NaN  4.02E+02 
std  1.93E+01  2.03E+04  2.88E+04  1.88E+04  5.22E+01  5.16E06  NaN  7.99E+01  
C28  mean  1.39E+02  2.74E+02  2.67E+02  1.33E+02  1.84E+02  9.53E+01  NaN  2.50E+01 
std  3.65E+01  1.88E+01  1.78E+01  2.19E+01  2.69E+01  4.43E+01  NaN  9.59E+00  
Friedman Aligned Test  3.3214  6.1429  5.7321  5.125  2.9464  3.5714  6.4643  2.6964 
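The "Friedman Aligned Test" rows above report average aligned ranks across the test instances (lower is better). As an illustrative sketch only, assuming a matrix of mean errors per problem and algorithm, and skipping average-ranking of ties for brevity, such ranks can be computed as follows:

```python
import numpy as np

def friedman_aligned_ranks(errors):
    """errors: (n_problems, n_algorithms) matrix of mean errors.

    For each problem, subtract the problem's mean across algorithms
    (the "aligned observation"), rank all aligned values jointly over
    every problem and algorithm, then average each algorithm's ranks.
    Returns one average aligned rank per algorithm (lower is better).
    Note: ties are not average-ranked in this simplified sketch.
    """
    aligned = errors - errors.mean(axis=1, keepdims=True)
    flat = aligned.ravel()
    ranks = np.empty(flat.size)
    ranks[flat.argsort()] = np.arange(1, flat.size + 1)
    return ranks.reshape(errors.shape).mean(axis=0)

# Toy example: algorithm 0 is consistently better than algorithm 1.
errs = np.array([[0.1, 0.9],
                 [0.2, 0.8]])
print(friedman_aligned_ranks(errs))  # algorithm 0 gets the lower rank
```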
5 Conclusion
This paper extended the PPS framework to solve CSOPs. More specifically, the proposed PPSDE integrates the PPS technique with an adaptive DE algorithm to deal with CSOPs. Three trial vector generation strategies, DE/rand/1, DE/current-to-rand/1, and DE/current-to-pbest/1, are used in PPSDE. In PPSDE, two subpopulations collaborate with each other to search for globally optimal solutions. The top subpopulation adopts the PPS technique to deal with constraints, while the bottom subpopulation uses the SF technique. In the top subpopulation, all three trial vector generation strategies are used to generate offspring. In the bottom subpopulation, a strategy adaptation mechanism, in which the trial vector generation strategies are periodically self-adapted by learning from their experience in generating promising solutions in the top subpopulation, is employed to choose a suitable trial vector generation strategy in each generation. Furthermore, the parameter adaptation principle of LSHADE44 is employed in both subpopulations. In the push stage of PPSDE, a CSOP is optimized without considering any constraints, which helps PPSDE cross infeasible regions in front of the global optimum. In the pull stage, the CSOP is optimized with an improved epsilon constraint-handling method. Comprehensive experiments indicate that the proposed PPSDE achieves significantly better results than the other seven constrained DE variants on most of the benchmark problems provided in the CEC2018 competition on constrained real-parameter optimization.
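As a minimal sketch, not the paper's exact PPSDE implementation (population handling, parameter adaptation, and the epsilon-level schedule are all simplified assumptions here), the three trial vector generation strategies and an epsilon-based comparison of the kind used in the pull stage could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, F):
    """DE/rand/1: v = x_r1 + F * (x_r2 - x_r3)."""
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_current_to_rand_1(pop, i, F, K):
    """DE/current-to-rand/1: v = x_i + K*(x_r1 - x_i) + F*(x_r2 - x_r3)."""
    r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
    return pop[i] + K * (pop[r1] - pop[i]) + F * (pop[r2] - pop[r3])

def de_current_to_pbest_1(pop, i, fitness, F, p=0.1):
    """DE/current-to-pbest/1 (as in JADE/SHADE):
    v = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2),
    where x_pbest is sampled from the best 100p% individuals."""
    top = np.argsort(fitness)[: max(1, int(p * len(pop)))]
    pbest = pop[rng.choice(top)]
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - pop[r2])

def epsilon_better(f1, cv1, f2, cv2, eps):
    """Epsilon constraint handling: solutions whose overall constraint
    violation is within the epsilon level are compared by objective
    value; otherwise the smaller violation wins.  In the push stage eps
    is effectively infinite, so constraints are ignored."""
    if (cv1 <= eps and cv2 <= eps) or cv1 == cv2:
        return f1 < f2
    return cv1 < cv2
```

With `eps = 0`, `epsilon_better` reduces to the feasibility rule (the SF technique); an improved epsilon method additionally shrinks `eps` over the generations of the pull stage.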
It is also worthwhile to point out that the PPS technique is not only a powerful constraint-handling method but also a general search framework for solving optimization problems with constraints. Obviously, much work remains to be done to improve the performance of PPSDE, such as enhanced constraint-handling mechanisms in the pull stage, enhanced strategies to switch the search behavior, and machine learning approaches integrated into the PPS framework. As another direction for future work, the proposed PPS will be applied to constrained optimization problems with more than three objectives, i.e., constrained many-objective optimization problems, to further verify the effectiveness of PPS. Some real-world optimization problems will also be used to test the performance of PPS embedded in different DE variants.
Acknowledgement
This research work was supported by the Key Lab of Digital Signal and Image Processing of Guangdong Province, the National Natural Science Foundation of China under Grants 61175073, 61300159, 61332002 and 51375287, the Natural Science Foundation of Jiangsu Province of China under Grant SBK2018022017, the China Postdoctoral Science Foundation under Grant 2015M571751, and the Project of International as well as Hong Kong, Macao and Taiwan Science and Technology Cooperation Innovation Platform in Universities in Guangdong Province (2015KGJH2014).
References
 (1) C. A. Floudas, P. M. Pardalos, A collection of test problems for constrained global optimization algorithms, Vol. 455, Springer Science & Business Media, 1990.
 (2) M. Gen, R. Cheng, Genetic algorithms and engineering optimization, Vol. 7, John Wiley & Sons, 2000.
 (3) G. Wu, R. Mallipeddi, P. Suganthan, Problem definitions and evaluation criteria for the CEC 2017 competition on constrained real-parameter optimization, Technical Report, National University of Defense Technology, Changsha, Hunan, PR China; Kyungpook National University, Daegu, South Korea; Nanyang Technological University, Singapore.
 (4) S. Das, P. N. Suganthan, Differential Evolution: A survey of the state-of-the-art, IEEE Transactions on Evolutionary Computation 15 (1) (2011) 4–31. doi:10.1109/TEVC.2010.2059031.
 (5) S. Das, S. S. Mullick, P. Suganthan, Recent advances in differential evolution – an updated survey, Swarm and Evolutionary Computation 27 (2016) 1–30. doi:10.1016/j.swevo.2016.01.004.
 (6) B. Wang, H. Li, J. Li, Y. Wang, Composite differential evolution for constrained evolutionary optimization, IEEE Transactions on Systems, Man, and Cybernetics: Systems (2018) 1–14. doi:10.1109/TSMC.2018.2807785.
 (7) J. Liu, J. Lampinen, A fuzzy adaptive differential evolution algorithm, Soft Computing 9 (6) (2005) 448–462.
 (8) J. Brest, S. Greiner, B. Boskovic, M. Mernik, V. Zumer, Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems, IEEE Transactions on Evolutionary Computation 10 (6) (2006) 646–657. doi:10.1109/TEVC.2006.872133.
 (9) J. Zhang, A. C. Sanderson, JADE: Adaptive differential evolution with optional external archive, IEEE Transactions on Evolutionary Computation 13 (5) (2009) 945–958. doi:10.1109/TEVC.2009.2014613.
 (10) Y. Wang, Z. Cai, Q. Zhang, Differential evolution with composite trial vector generation strategies and control parameters, IEEE Transactions on Evolutionary Computation 15 (1) (2011) 55–66. doi:10.1109/TEVC.2010.2087271.
 (11) R. Tanabe, A. Fukunaga, Success-history based parameter adaptation for differential evolution, in: 2013 IEEE Congress on Evolutionary Computation (CEC), IEEE, 2013, pp. 71–78.
 (12) R. Tanabe, A. S. Fukunaga, Improving the search performance of SHADE using linear population size reduction, in: 2014 IEEE Congress on Evolutionary Computation (CEC), 2014, pp. 1658–1665. doi:10.1109/CEC.2014.6900380.
 (13) R. Poláková, L-SHADE with competing strategies applied to constrained optimization, in: 2017 IEEE Congress on Evolutionary Computation (CEC), 2017, pp. 1683–1689. doi:10.1109/CEC.2017.7969504.
 (14) E. Mezura-Montes, C. A. C. Coello, Constraint-handling in nature-inspired numerical optimization: Past, present and future, Swarm and Evolutionary Computation 1 (4) (2011) 173–194. doi:10.1016/j.swevo.2011.10.001.
 (15) C. A. C. Coello, Constraint-handling techniques used with evolutionary algorithms, in: Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '17, ACM, New York, NY, USA, 2017, pp. 675–701. doi:10.1145/3067695.3067704.
 (16) C. A. C. Coello, Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art, Computer Methods in Applied Mechanics and Engineering 191 (11-12) (2002) 1245–1287.
 (17) T. P. Runarsson, X. Yao, Search biases in constrained evolutionary optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 35 (2) (2005) 233–243.
 (18) T. Bäck, F. Hoffmeister, H. Schwefel, A survey of evolution strategies, in: Proceedings of the 4th International Conference on Genetic Algorithms, 1991, pp. 2–9.
 (19) A. Homaifar, C. X. Qi, S. H. Lai, Constrained optimization via genetic algorithms, Simulation 62 (4) (1994) 242–253.
 (20) J. A. Joines, C. R. Houck, On the use of nonstationary penalty functions to solve nonlinear constrained optimization problems with GA's, in: Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, IEEE, 1994, pp. 579–584.
 (21) J. C. Bean, A. Ben Hadj-Alouane, A dual genetic algorithm for bounded integer programs, 1993.
 (22) D. W. Coit, A. E. Smith, D. M. Tate, Adaptive penalty methods for genetic optimization of constrained combinatorial problems, INFORMS Journal on Computing 8 (2) (1996) 173–182.
 (23) A. Ben Hadj-Alouane, J. C. Bean, A genetic algorithm for the multiple-choice integer program, Operations Research 45 (1) (1997) 92–101.
 (24) Y. G. Woldesenbet, G. G. Yen, B. G. Tessema, Constraint handling in multiobjective evolutionary optimization, IEEE Transactions on Evolutionary Computation 13 (3) (2009) 514–525. doi:10.1109/TEVC.2008.2009032.
 (25) K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2) (2000) 311–338. doi:10.1016/S0045-7825(99)00389-8.
 (26) T. Takahama, S. Sakai, Constrained optimization by ε constrained particle swarm optimizer with ε-level control, in: Soft Computing as Transdisciplinary Science and Technology, Springer, 2005, pp. 1019–1029.
 (27) T. P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284–294.
 (28) K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation 6 (2) (2002) 182–197. doi:10.1109/4235.996017.
 (29) E. Mezura-Montes, C. A. Coello Coello, Constraint-handling in nature-inspired numerical optimization: Past, present and future, Swarm and Evolutionary Computation 1 (4) (2011) 173–194.
 (30) Z. Cai, Y. Wang, A multiobjective optimization-based evolutionary algorithm for constrained optimization, IEEE Transactions on Evolutionary Computation 10 (6) (2006) 658–675.
 (31) T. Ray, H. K. Singh, A. Isaacs, W. Smith, Infeasibility driven evolutionary algorithm for constrained optimization, in: Constrainthandling in evolutionary optimization, Springer, 2009, pp. 145–165.
 (32) Y. Wang, Z. Cai, Y. Zhou, W. Zeng, An adaptive tradeoff model for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 12 (1) (2008) 80–92.
 (33) B. Y. Qu, P. N. Suganthan, Constrained multi-objective optimization algorithm with an ensemble of constraint handling methods, Engineering Optimization 43 (4) (2011) 403–416.
 (34) Z. Fan, W. Li, X. Cai, H. Li, C. Wei, Q. Zhang, K. Deb, E. Goodman, Push and pull search for solving constrained multi-objective optimization problems, Swarm and Evolutionary Computation (2018). doi:10.1016/j.swevo.2018.08.017.
 (35) J. Tvrdík, R. Poláková, A simple framework for constrained problems with application of L-SHADE44 and IDE, in: 2017 IEEE Congress on Evolutionary Computation (CEC), 2017, pp. 1436–1443. doi:10.1109/CEC.2017.7969472.
 (36) A. Trivedi, K. Sanyal, P. Verma, D. Srinivasan, A unified differential evolution algorithm for constrained optimization problems, in: 2017 IEEE Congress on Evolutionary Computation (CEC), 2017, pp. 1231–1238. doi:10.1109/CEC.2017.7969446.
 (37) A. Trivedi, D. Srinivasan, N. Biswas, An improved unified differential evolution algorithm for constrained optimization problems, Technical Report, http://web.mysites.ntu.edu.sg/epnsugan/PublicSite/Shared%20Documents/CEC2018/Constrained/Improved_Unified_Differential_Evolution_CEC_2018_Report.pdf.
 (38) Z. Fan, Z. Wang, Y. Fang, W. Li, Y. Yuan, X. Bian, Adaptive recombination operator selection in push and pull search for solving constrained single-objective optimization problems, in: Bio-Inspired Computing: Theories and Applications, 13th International Conference, BIC-TA 2018, Beijing, China, November 2–4, 2018, Proceedings, Part I, 2018, pp. 355–367. doi:10.1007/978-981-13-2826-8_31.
 (39) A. K. Qin, V. L. Huang, P. N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Transactions on Evolutionary Computation 13 (2) (2009) 398–417. doi:10.1109/TEVC.2008.927706.
 (40) R. Mallipeddi, P. Suganthan, Q. Pan, M. Tasgetiren, Differential evolution algorithm with ensemble of parameters and mutation strategies, Applied Soft Computing 11 (2) (2011) 1679–1696. doi:10.1016/j.asoc.2010.04.024.
 (41) J. J. Liang, A. K. Qin, P. N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation 10 (3) (2006) 281–295. doi:10.1109/TEVC.2005.857610.
 (42) N. Hansen, A. Ostermeier, Completely derandomized self-adaptation in evolution strategies, Evolutionary Computation 9 (2) (2001) 159–195. doi:10.1162/106365601750190398.
 (43) C. García-Martínez, M. Lozano, F. Herrera, D. Molina, A. Sánchez, Global and local real-coded genetic algorithms based on parent-centric crossover operators, European Journal of Operational Research 185 (3) (2008) 1088–1113. doi:10.1016/j.ejor.2006.06.043.
 (44) R. Poláková, J. Tvrdík, P. Bujok, L-SHADE with competing strategies applied to CEC2015 learning-based test suite, in: 2016 IEEE Congress on Evolutionary Computation (CEC), 2016, pp. 4790–4796. doi:10.1109/CEC.2016.7744403.
 (45) L. Tang, Y. Dong, J. Liu, Differential evolution with an individual-dependent mechanism, IEEE Transactions on Evolutionary Computation 19 (4) (2015) 560–574. doi:10.1109/TEVC.2014.2360890.
 (46) W. Gong, Z. Cai, Differential evolution with ranking-based mutation operators, IEEE Transactions on Cybernetics 43 (6) (2013) 2066–2081. doi:10.1109/TCYB.2013.2239988.
 (47) M. Hellwig, H. Beyer, A matrix adaptation evolution strategy for constrained real-parameter optimization, in: 2018 IEEE Congress on Evolutionary Computation (CEC), 2018, pp. 1–8. doi:10.1109/CEC.2018.8477950.