1 Introduction
Many real-world optimization problems can be formulated as constrained multi-objective optimization problems (CMOPs), which can be defined as follows kalyanmoy2001multi:
(1) $\begin{aligned} \text{minimize} \quad & F(x) = (f_1(x), \ldots, f_m(x))^T \\ \text{subject to} \quad & g_i(x) \ge 0, \quad i = 1, \ldots, q \\ & h_j(x) = 0, \quad j = 1, \ldots, p \\ & x \in \mathbb{R}^n \end{aligned}$
where $F(x)$ is an $m$-dimensional objective vector, and $F(x) \in \mathbb{R}^m$. $g_i(x) \ge 0$ is an inequality constraint, and $q$ is the number of inequality constraints. $h_j(x) = 0$ is an equality constraint, and $p$ represents the number of equality constraints. $x$ is an $n$-dimensional decision vector. In order to solve CMOPs with equality constraints, the equality constraints are often transformed into inequality constraints by using an extremely small positive number $\delta$ as follows:
(2) $h'_j(x) = \delta - |h_j(x)| \ge 0, \quad j = 1, \ldots, p$
To deal with a set of constraints in CMOPs, an overall constraint violation is employed as follows:
(3) $\phi(x) = \sum_{i=1}^{q} \left| \min(g_i(x), 0) \right| + \sum_{j=1}^{p} \left| \min(h'_j(x), 0) \right|$
Given a solution $x$, if $\phi(x) = 0$, $x$ is a feasible solution; otherwise it is infeasible. All feasible solutions form a feasible solution set.
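The relaxation of Eq. (2) and the aggregation of Eq. (3) can be sketched as follows. The constraint functions `g` and `h` below are illustrative examples, not taken from the paper, and the value of $\delta$ is an assumption:

```python
def overall_violation(x, ineq_constraints, eq_constraints, delta=1e-4):
    """Overall constraint violation phi(x) as in Eq. (3).

    ineq_constraints: callables g_i, feasible when g_i(x) >= 0.
    eq_constraints:   callables h_j, feasible when h_j(x) == 0,
                      relaxed via Eq. (2) to delta - |h_j(x)| >= 0.
    """
    phi = 0.0
    for g in ineq_constraints:
        phi += abs(min(g(x), 0.0))       # contributes only when g(x) < 0
    for h in eq_constraints:
        relaxed = delta - abs(h(x))      # Eq. (2)
        phi += abs(min(relaxed, 0.0))
    return phi

# Illustrative constraints: g(x) = x0 - 1 >= 0, h(x) = x0 + x1 - 3 = 0
g = [lambda x: x[0] - 1.0]
h = [lambda x: x[0] + x[1] - 3.0]
print(overall_violation([2.0, 1.0], g, h))   # feasible point: 0.0
print(overall_violation([0.5, 1.0], g, h))   # infeasible: positive violation
```

A solution is feasible exactly when the returned value is zero.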
To deal with constraints, Deb et al. 996017 defined a constrained-domination principle (CDP): a solution $x^1$ is said to constrained-dominate a solution $x^2$ if one of the following conditions is met:

Solution $x^1$ is a feasible solution and solution $x^2$ is an infeasible solution.

Both solution $x^1$ and solution $x^2$ are infeasible solutions, and the overall constraint violation of solution $x^1$ is smaller than that of solution $x^2$.

Both solution $x^1$ and solution $x^2$ are feasible solutions, and solution $x^1$ dominates solution $x^2$ in terms of objectives.

Given a feasible solution $x^*$, if there is no other feasible solution $x$ satisfying $x \prec x^*$, $x^*$ is called a Pareto optimal solution. All Pareto optimal solutions constitute a Pareto set (PS). The set of the mapping vectors of the PS in the objective space is called a Pareto front (PF), which is defined as $PF = \{F(x) \mid x \in PS\}$.
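The three CDP conditions above can be expressed as a single comparison routine. This is a minimal sketch; the helper `dominates` implements standard Pareto dominance for minimization, and the violations are assumed to come from Eq. (3):

```python
def dominates(f1, f2):
    """Pareto dominance on objective vectors (minimization)."""
    return all(a <= b for a, b in zip(f1, f2)) and \
           any(a < b for a, b in zip(f1, f2))

def constrained_dominates(f1, phi1, f2, phi2):
    """CDP: does solution 1 (objectives f1, violation phi1)
    constrained-dominate solution 2 (objectives f2, violation phi2)?"""
    if phi1 == 0 and phi2 > 0:      # feasible beats infeasible
        return True
    if phi1 > 0 and phi2 > 0:       # both infeasible: smaller violation wins
        return phi1 < phi2
    if phi1 == 0 and phi2 == 0:     # both feasible: Pareto dominance decides
        return dominates(f1, f2)
    return False

print(constrained_dominates([1.0, 2.0], 0.0, [0.5, 0.5], 0.3))  # True
print(constrained_dominates([1.0, 2.0], 0.0, [2.0, 3.0], 0.0))  # True
```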
Over the past decade, a great deal of research has been devoted to multi-objective evolutionary algorithms (MOEAs) cai2018constrained cai2018decomposition, but comparatively few works have addressed CMOPs. Recently, several constrained multi-objective evolutionary algorithms (CMOEAs) Fan2016Angle fan2017improved wang2019utilizing liu2019handling have been proposed to solve CMOPs. CMOEAs are particularly suitable for solving CMOPs because they can find a set of Pareto optimal solutions in a single run and are not affected by the mathematical properties of the objective functions. Therefore, the use of evolutionary algorithms to solve multi-objective optimization problems 7927724 cai2015external has become a research hotspot in recent years.
To better balance minimizing the objectives and satisfying the constraints of CMOPs, many constraint-handling mechanisms have been designed, such as penalty function methods 5586543, CDP deb2000efficient, stochastic ranking runarsson2000stochastic, the $\epsilon$-constrained method takahama2010constrained, and multi-objective concepts wang2007multiobjective. For example, penalty function methods transform a constrained optimization problem into an unconstrained one by adding the constraint violations, multiplied by penalty factors, to the objectives. If the penalty factors remain constant throughout the optimization process, it is called a static penalty method homaifar1994constrained. If the penalty factors are constructed as a function of the number of iterations or the iteration time, it is called a dynamic penalty approach joines1994use. If the penalty factors change according to feedback information wang2017improving gathered during the search process, it is called an adaptive penalty approach woldesenbet2009constraint.
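As a concrete illustration of the dynamic penalty idea (not the method proposed in this paper), the penalized objective can use a penalty factor that grows with the generation counter, in the style of Joines and Houck joines1994use. The constants `C` and `alpha` below are illustrative choices:

```python
def dynamic_penalized_objective(f_value, violation, t, C=0.5, alpha=2.0):
    """Dynamic penalty: the factor (C * t)^alpha grows with generation t,
    so constraint violations are punished more and more severely as the
    search proceeds (Joines & Houck style; C, alpha are example values)."""
    return f_value + (C * t) ** alpha * violation

# Early in the run the same violation is penalized lightly ...
print(dynamic_penalized_objective(1.0, 0.2, t=1))
# ... and heavily near the end of the run.
print(dynamic_penalized_objective(1.0, 0.2, t=100))
```

The growing factor is what distinguishes the dynamic scheme from a static penalty, where the factor stays fixed for the whole run.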
Deb proposed a constraint-handling method called CDP deb2000efficient, in which the fitness of a feasible solution is always better than that of an infeasible one. Subsequently, CDP was extended to differential evolution (DE) by Mezura-Montes et al. mezura2006modified to select target vectors and trial vectors. Moreover, CDP was used for designing parameter control in DE for constrained optimization mezura2009parameter.
To overcome the weaknesses of penalty-based constraint-handling methods, stochastic ranking was proposed runarsson2000stochastic, which uses a bubble-sort-like procedure to deal with constrained optimization problems. Stochastic ranking uses a probability parameter $p_f$ to determine whether a comparison is based on the objectives or on the constraints. In the case of $p_f = 0$, stochastic ranking behaves like the feasibility rules. Furthermore, it can be coupled with various algorithms; for example, stochastic ranking has been combined with DE fan2009improved and ant colony optimization leguizamon2007boundary.

In this paper, we propose a new approach, namely PPS-M2M, to solve CMOPs, which combines the M2M approach Liu2014Decomposition with the push and pull search (PPS) framework fan2018push. Unlike other constraint-handling mechanisms, PPS-M2M decomposes a CMOP into a number of simple constrained multi-objective optimization subproblems in the initial phase. Each subproblem corresponds to a subpopulation. During the search process, each subpopulation evolves in a collaborative manner to ensure the diversity of the population. Inspired by the idea of the information feedback model, some information about the constrained landscape is collected to help set the parameters of the constraint-handling mechanisms.
In addition, PPS-M2M divides the search process into two different phases. In the first phase, each subpopulation approaches the unconstrained PF as closely as possible without considering any constraints. In the second phase, each subpopulation is pulled back to the constrained PF using constraint-handling approaches. The pull search stage is divided into two parts: (1) for the first 90% of its generations, only an improved epsilon constraint-handling mechanism fan2017improved is used to optimize each subproblem; (2) in the last 10% of the generations, all subpopulations are merged into one population, and the $\epsilon$-dominance laumanns2002combining and the improved epsilon constraint-handling mechanism fan2017improved then work together to evolve the population. In summary, the proposed PPS-M2M has the following advantages.

At the beginning of the search, the method decomposes the population into a set of subpopulations, and each subpopulation searches a different multi-objective subproblem in a coordinated manner. In other words, the PFs of all these subproblems constitute the PF of the CMOP. The computational complexity is thus reduced by limiting each operator to its own subpopulation, and the convergence and diversity of the population are effectively ensured.

When dealing with constraints, the method follows a procedure of "ignore the constraints first and consider the constraints second", so that infeasible regions encountered some distance before the true PF present no extra barriers to the working population.

Since the landscape of constraints has been probed in the unconstrained push search stage, this information can be employed to guide the parameter settings of the constraint-handling mechanisms in the pull search stage.
The remainder of this paper is organized as follows. Section 2 introduces the general idea of the PPS framework and the M2M decomposition approach. Section 3 gives an instantiation of PPS-M2M. Section 4 presents a set of experiments comparing the proposed PPS-M2M with six other CMOEAs, including M2M Liu2014Decomposition, MOEA/D-Epsilon Yang:2014vt, MOEA/D-SR jan2013study, MOEA/D-CDP jan2013study, C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017. Finally, conclusions are drawn in Section 5.
2 Related Work
In this section, we introduce the general idea of PPS framework and the M2M population decomposition approach, which are used in the rest of this paper.
2.1 The general idea of PPS framework
The push and pull search (PPS) framework was introduced by Fan et al. fan2018push. Unlike other constraint-handling mechanisms, the search process of PPS is divided into two different stages, the push search and the pull search, following a procedure of "push first and pull second": the working population is pushed toward the unconstrained PF without considering any constraints in the push stage, and a constraint-handling mechanism is then used to pull the working population to the constrained PF in the pull stage.
In order to switch from the push search stage to the pull search stage, the following condition is applied:
(4) $r_k = \max\{r_z^k, r_n^k\} \le \epsilon$
where $\epsilon$ is a user-defined threshold. $r_z^k$ is the rate of change of the ideal point over the last $l$ generations according to Eq. (5), and $r_n^k$ is the rate of change of the nadir point according to Eq. (6).
(5) $r_z^k = \max_{i=1,\ldots,m} \dfrac{|z_i^k - z_i^{k-l}|}{\max\{|z_i^{k-l}|, \Delta\}}$
(6) $r_n^k = \max_{i=1,\ldots,m} \dfrac{|n_i^k - n_i^{k-l}|}{\max\{|n_i^{k-l}|, \Delta\}}$
where $z^k$ and $n^k$ are the ideal and nadir points in the $k$-th generation, and $z^{k-l}$ and $n^{k-l}$ are the ideal and nadir points in the $(k-l)$-th generation. $r_z^k$ and $r_n^k$ are two points in the interval $[0, 1]$. $\Delta$ is a very small positive number, which is used to make sure that the denominators in Eq. (5) and Eq. (6) are not equal to zero. When $r_k$ is less than $\epsilon$, the push search stage is completed and the pull search stage is ready to start.
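The switch condition of Eqs. (4)-(6) can be sketched as follows. The numerical values of `eps` and `big_delta` below are illustrative defaults, not necessarily those used in the experiments:

```python
def rate_of_change(p_now, p_past, big_delta=1e-6):
    """Maximum relative change over the objectives of an ideal or nadir
    point between generation k and generation k - l (Eq. (5)/(6))."""
    return max(abs(a - b) / max(abs(b), big_delta)
               for a, b in zip(p_now, p_past))

def switch_to_pull(ideal_k, ideal_kl, nadir_k, nadir_kl, eps=1e-3):
    """Eq. (4): switch when r_k = max{r_z, r_n} <= eps."""
    r_k = max(rate_of_change(ideal_k, ideal_kl),
              rate_of_change(nadir_k, nadir_kl))
    return r_k <= eps

# Ideal/nadir points still moving noticeably: keep pushing.
print(switch_to_pull([0.0, 0.1], [0.0, 0.5], [1.0, 2.0], [1.5, 2.0]))  # False
# Points stagnant over the last l generations: switch to the pull stage.
print(switch_to_pull([0.0, 0.1], [0.0, 0.1], [1.0, 2.0], [1.0, 2.0]))  # True
```

Intuitively, once the unconstrained search stops improving the ideal and nadir points, the population has reached the vicinity of the unconstrained PF and pulling can begin.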
The major advantages of the PPS framework include:

In the push stage, the working population conveniently gets across infeasible regions without considering any constraints, which avoids the impact of infeasible regions encountered some distance before the true PF.

Since the landscape of constraints has already been estimated in the push search stage, valuable information can be collected to guide the parameter settings in the pull search stage, which not only facilitates the parameter settings of the algorithm, but also enhances its adaptability when dealing with CMOPs.
2.2 The M2M population decomposition approach
Employing a number of subpopulations to solve a problem in a collaborative way rizk2018novel is a widely used approach, which can help an algorithm balance its convergence and diversity. One of the most popular methods is the M2M population decomposition approach Liu2014Decomposition, which decomposes a multi-objective optimization problem into a number of simple multi-objective optimization subproblems at initialization, then solves these subproblems simultaneously in a coordinated manner. For this purpose, $K$ unit vectors $v^1, \ldots, v^K$ are chosen in the first octant of the objective space. Then $\mathbb{R}_+^m$ is divided into $K$ subregions $\Omega_1, \ldots, \Omega_K$, where $\Omega_k$ is
(7) $\Omega_k = \{u \in \mathbb{R}_+^m \mid \langle u, v^k \rangle \le \langle u, v^j \rangle \text{ for any } j = 1, \ldots, K\}$
where $\langle u, v^k \rangle$ is the acute angle between $u$ and $v^k$. Therefore, the population is decomposed into $K$ subpopulations, and each subpopulation searches a different multi-objective subproblem. Subproblem $k$ is defined as:
(8) minimize $F(x) = (f_1(x), \ldots, f_m(x))^T$ subject to $x \in \Omega$ and $F(x) \in \Omega_k$
Altogether there are $K$ subproblems, and each subproblem is solved by a subpopulation of $S$ individuals. In order to keep $S$ individuals for each subproblem, some selection strategies are used. If a subpopulation $P_k$ has fewer than $S$ individuals, then individuals are randomly selected from $P$ (the entire population) and added to $P_k$.
A major advantage of M2M is that it can effectively balance diversity and convergence at each generation by decomposing a multi-objective optimization problem into multiple simple multi-objective optimization problems.
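The subregion assignment of Eq. (7) can be sketched as follows: an objective vector belongs to the subregion whose direction vector forms the smallest acute angle with it. The direction vectors `V` below are illustrative:

```python
import math

def angle(u, v):
    """Acute angle between two nonnegative vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # clamp to [-1, 1] to guard against floating-point round-off
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def assign_subregion(F_x, direction_vectors):
    """Eq. (7): index of the subregion whose direction vector has the
    smallest acute angle with the objective vector F(x)."""
    return min(range(len(direction_vectors)),
               key=lambda k: angle(F_x, direction_vectors[k]))

V = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # three example direction vectors
print(assign_subregion((0.9, 0.1), V))      # 0: closest to (1, 0)
print(assign_subregion((0.5, 0.6), V))      # 1: closest to (1, 1)
```

Grouping solutions this way keeps each subpopulation focused on one slice of the objective space, which is exactly how M2M maintains diversity.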
3 An Instantiation of PPSM2M
This section describes the details of an instantiation, which combines the M2M approach with the PPS framework in a nondominated sorting framework to solve CMOPs.
3.1 The M2M approach
In the initial stage, PPS-M2M uses a decomposition strategy to decompose a CMOP into a set of subproblems that are solved in a collaborative manner, with each subproblem corresponding to a subpopulation. For the sake of simplicity, we assume that all the objective functions of a CMOP are non-negative. Otherwise, each objective function $f_i(x)$ is replaced by $f_i(x) - z_i^{\min}$, where $z_i^{\min}$ is the minimum value of the $i$-th objective function found so far.

Assuming that the objective space is divided into $K$ subregions, $K$ direction vectors $v^1, \ldots, v^K$ are uniformly distributed on the first octant of the unit sphere, where $v^k$ is the center of the $k$-th subregion. Then the objective space is divided into $K$ non-adjacent subregions $\Omega_1, \ldots, \Omega_K$, where the $k$-th subregion can be obtained by Eq. (7). Through such a decomposition method, the multi-objective optimization problem (Eq. (1)) can be decomposed into $K$ simple CMOPs. The procedure is described in Algorithm 1. As an example, when the number of objectives is two and the number of subregions is $K = 6$, the population decomposition is illustrated in Fig. 1, where $v^1, \ldots, v^6$ are six evenly distributed direction vectors.

3.2 The PPS framework
The search process of the PPS framework is divided into two main stages: the push search stage and the pull search stage. In the push search stage, each subpopulation uses an unconstrained NSGA-II to search for non-dominated solutions without considering any constraints. When using the unconstrained NSGA-II to solve the simple multi-objective optimization problems, an individual $i$ is measured by two attributes 996017: the non-domination rank $i_{rank}$ and the crowding distance $i_{dist}$. When an individual $i$ is compared with another individual $j$, individual $i$ enters the descendant population if one of the following two conditions is satisfied:

Individual $i$ has a lower (better) non-domination rank than individual $j$, i.e. $i_{rank} < j_{rank}$;

Individuals $i$ and $j$ have the same non-domination rank and individual $i$ has the larger crowding distance, i.e. $i_{rank} = j_{rank}$ and $i_{dist} > j_{dist}$.

In other words, when two individuals belong to different non-domination ranks, the individual with the lower (better) rank is selected; when the two individuals have the same rank, the one with the larger crowding distance is selected.
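The standard NSGA-II crowded-comparison operator described above can be sketched in a few lines (a smaller rank wins, and ties are broken by the larger crowding distance):

```python
def crowded_compare(rank_i, dist_i, rank_j, dist_j):
    """NSGA-II crowded-comparison operator: True if individual i is
    preferred over individual j. Lower non-domination rank wins; within
    the same rank, the larger crowding distance wins."""
    if rank_i != rank_j:
        return rank_i < rank_j
    return dist_i > dist_j

print(crowded_compare(1, 0.2, 2, 0.9))  # True:  better (lower) rank wins
print(crowded_compare(1, 0.2, 1, 0.9))  # False: same rank, smaller distance
```

Preferring the larger crowding distance within a front is what preserves spread along the front.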
In the push search stage, a constrained optimization problem is optimized without considering any constraints. The pseudocode of the push search is given in Algorithm 2. In line 2, non-dominated sorting is carried out on the subpopulation. In lines 3-6, whole fronts of solutions are selected until the number of selected solutions is greater than the subpopulation size. Lines 7-9 select the remaining solutions from the last front considered. In line 10, the subpopulation is updated with the selected solutions.
In the pull search stage, the constrained optimization problem is optimized by considering the constraints, which pulls the population to the feasible and non-dominated regions. The pseudocode of the pull search is given in Algorithm 3.
The mechanism described in Eq. (4) controls the switch from the push search to the pull search. At the beginning of the evolutionary process, the value of $r_k$ is initialized to 1.0 in order to ensure a thorough search in the push stage. The value of $r_k$ is then updated by Eq. (4). When the value of $r_k$ is less than or equal to the preset threshold $\epsilon$, the search behavior is changed.
In the pull stage, we need to prevent the population from falling into local optima and to balance the evolutionary search between feasible and infeasible regions. To achieve these goals, an improved epsilon constraint-handling mechanism fan2017improved and the $\epsilon$-dominance laumanns2002combining are used to deal with constraints and objectives in the pull search stage, with the detailed formula given as follows.
The improved epsilon constraint-handling mechanism is defined as follows:
(9) $\epsilon(k) = \begin{cases} (1 - \tau)\,\epsilon(k - 1), & \text{if } rf_k < \alpha \text{ and } k < T_c \\ \epsilon(0)\left(1 - \frac{k}{T_c}\right)^{cp}, & \text{if } rf_k \ge \alpha \text{ and } k < T_c \\ 0, & \text{if } k \ge T_c \end{cases}$
where $\epsilon(k)$ is the value of the $\epsilon$ function at generation $k$, and $rf_k$ is the proportion of feasible solutions in generation $k$. $cp$ controls the speed at which the relaxation of constraints decreases in the case of $rf_k \ge \alpha$. $\alpha$ controls the search preference between the feasible and infeasible regions. $\tau$ is used to control the reduction of the relaxation of constraints in the case of $rf_k < \alpha$. $\epsilon(k)$ stops updating when the generation counter reaches $T_c$. In the original mechanism fan2017improved, $\epsilon(0)$ is set to the overall constraint violation of the top $\theta$-th individual in the initial population; here, $\epsilon(0)$ is set to the maximum overall constraint violation when the push search finishes. In the case of $rf_k < \alpha$, Eq. (9) sets $\epsilon(k)$ with an exponentially decreasing speed, which has the potential to find feasible solutions more efficiently than the setting in takahama2006constrained. In the case of $rf_k \ge \alpha$, Eq. (9) has the same setting as adopted in takahama2006constrained. In the pull search stage, a new individual is selected using the constraint-handling mechanism described in Algorithm 3.
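One generation of the epsilon-level update of Eq. (9) can be sketched as follows. The default values of `tau`, `alpha`, `cp` and `Tc` are illustrative, not necessarily those used in the experiments:

```python
def update_epsilon(eps_prev, eps0, k, r_fk,
                   tau=0.1, alpha=0.95, cp=2.0, Tc=800):
    """Sketch of the improved epsilon level of Eq. (9).
    eps_prev = epsilon(k-1), eps0 = epsilon(0), r_fk = feasible ratio."""
    if k >= Tc:
        return 0.0                       # no relaxation after generation Tc
    if r_fk < alpha:                     # few feasible solutions found:
        return (1.0 - tau) * eps_prev    # shrink epsilon exponentially
    return eps0 * (1.0 - k / Tc) ** cp   # enough feasible: polynomial decay

eps0 = 10.0
print(update_epsilon(10.0, eps0, k=1, r_fk=0.1))    # exponential branch: 9.0
print(update_epsilon(9.0, eps0, k=2, r_fk=0.99))    # polynomial-decay branch
print(update_epsilon(9.0, eps0, k=900, r_fk=0.99))  # past Tc: 0.0
```

A solution with overall violation below the current epsilon level is treated as feasible, so shrinking the level gradually tightens the constraints.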
3.3 PPS Embedded in M2M
Fig. 2: the search behavior of PPS-M2M, panels (a)-(f).
The pseudocode of PPS-M2M is given in Algorithm 4. The algorithm is initialized at lines 1-3. At line 2, the population of a CMOP is divided into $K$ subpopulations, and the number of individuals in each subpopulation is equal to $S$. At line 3, the maximum rate of change of the ideal and nadir points $r_k$ is initialized to 1.0, and the flag of the search stage is set to push. The algorithm runs repeatedly from line 4 to line 38 until the termination condition is met. Lines 5-13 describe the process of generating new solutions for each subpopulation: a number of new solutions are generated at lines 6-10, and at lines 12-13 the solution set is allocated to the subpopulations according to Eq. (7). The maximum rate of change between the ideal and nadir points during the last $l$ generations is calculated at line 15. The parameter $\epsilon(k)$ is updated at lines 16-25. The updating process for each subpopulation is described in lines 26-36. If the size of a subpopulation is less than $S$, then individual solutions are randomly selected from the whole population and added to it. If the size of a subpopulation is greater than $S$, then solutions are selected by using the PPS framework. More specifically, in the push search stage, individual solutions are selected by employing the non-dominated sorting method without considering any constraints, as illustrated in line 31. In the pull search stage, individual solutions are selected by using the improved epsilon constraint-handling approach, as illustrated in line 33. The generation counter is updated at line 37. At line 39, a set of non-dominated and feasible solutions is selected.
As an example, the search behavior of PPS-M2M is illustrated in Fig. 2 and can be summarized as follows. At first, five direction vectors are uniformly sampled in the objective space, from which five non-adjacent subregions are constructed. The working subpopulations reach the unconstrained PF without considering any constraints in the push search, as illustrated by Fig. 2(a)-(c). It is notable that in this particular case some solutions located on the unconstrained PF are feasible. In the pull search stage, the infeasible solutions are pulled to the feasible and non-dominated regions, as illustrated by Fig. 2(d)-(f).
4 Experimental Study
4.1 Experimental Settings
The proposed PPS-M2M is compared with six other algorithms (M2M Liu2014Decomposition, MOEA/D-Epsilon Yang:2014vt, MOEA/D-SR jan2013study, MOEA/D-CDP jan2013study, C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017) on LIR-CMOP1-14 fan2017improved. The experimental parameters are listed as follows:

Population size: $N$.

Halting condition: each algorithm is run 30 times independently, and stops when 300,000 function evaluations are reached.

The number of subregions $K$ is set separately for the bi-objective and the tri-objective instances.

The direction vectors are uniformly selected from the unit sphere in the first octant.

Parameter setting in the $\epsilon$-dominance: the relaxation factor $\epsilon$.

Parameter settings in the improved epsilon constraint-handling mechanism: $\tau$ and $\alpha$.

Parameter settings in MOEA/D-IEpsilon: $T_c$, $\alpha$, $\tau$, and $\theta$.

Parameter settings in MOEA/D-Epsilon: $T_c$, $cp$, and $\theta$.

Parameter setting in MOEA/D-SR: $S_r$.
4.2 Performance Metric
To measure the performance of the proposed algorithm (PPS-M2M) and the six other algorithms (M2M Liu2014Decomposition, MOEA/D-Epsilon Yang:2014vt, MOEA/D-SR jan2013study, MOEA/D-CDP jan2013study, C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017), two performance indicators are used: the inverted generational distance (IGD) bosman2003balance and the hypervolume Zitzler1999Multiobjective.

Inverted Generational Distance (IGD):
The inverted generational distance (IGD) is an inverse mapping of the generational distance (GD). It is the average distance from the points in a representative set of the true PF to the non-dominated solution set obtained by the algorithm. The calculation formula is
(10) $\mathrm{IGD}(P^*, A) = \dfrac{\sum_{y^* \in P^*} d(y^*, A)}{|P^*|}$
where $P^*$ is a set of representative solutions of the true PF, $A$ is the non-dominated set achieved by the algorithm, and $d(y^*, A)$ represents the minimum Euclidean distance from the point $y^*$ on the true PF to the individuals in $A$. The smaller the IGD value, the better the performance of the algorithm.
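The IGD computation of Eq. (10) can be sketched directly; the reference set and achieved set below are tiny illustrative examples:

```python
import math

def igd(reference_front, achieved_set):
    """Eq. (10): mean distance from each reference PF point to its
    nearest achieved objective vector (smaller is better)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    total = sum(min(dist(r, a) for a in achieved_set)
                for r in reference_front)
    return total / len(reference_front)

P_star = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]   # sampled true PF
A = [(0.0, 1.0), (1.0, 0.0)]                    # achieved non-dominated set
print(igd(P_star, A))
```

Because the average runs over the reference set, IGD penalizes both poor convergence and missing parts of the front (as with the uncovered middle point above).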

Hypervolume (HV):
HV has become a popular evaluation indicator, which reflects the closeness of the set of non-dominated solutions achieved by a CMOEA to the true PF. The performance of a CMOEA is evaluated by calculating the hypervolume of the space enclosed by the non-dominated solution set and a reference point. The calculation formula is as follows:
(11) $\mathrm{HV}(S) = L\left(\bigcup_{y \in S} [y_1, z_1^r] \times \cdots \times [y_m, z_m^r]\right)$
where $L$ is the Lebesgue measure, $m$ denotes the number of objectives, and $z^r = (z_1^r, \ldots, z_m^r)$ is a user-defined reference point in the objective space, placed at 1.2 times the nadir point of the true PF. A larger HV value indicates better performance in terms of diversity and/or convergence.
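For the two-objective case, the Lebesgue measure in Eq. (11) reduces to an area that can be computed with a simple sweep over the front sorted by the first objective; the front and reference point below are illustrative:

```python
def hv_2d(front, ref):
    """Hypervolume of a 2-objective non-dominated set (minimization):
    the area dominated by the set and bounded by the reference point."""
    # keep only points that dominate the reference point,
    # sorted by f1 ascending; f2 then decreases along the front
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    volume, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        volume += (ref[0] - f1) * (prev_f2 - f2)   # new horizontal slab
        prev_f2 = f2
    return volume

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hv_2d(front, ref=(4.0, 4.0)))   # 6.0
```

For three or more objectives the union of boxes no longer decomposes into such a sweep, and dedicated algorithms (e.g. WFG-style recursion) are used instead.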
4.3 Discussion of Experiments
Table 1 and Table 3 show the IGD and HV values of the seven algorithms on LIR-CMOP1-14. According to the Friedman aligned test, PPS-M2M achieves the highest ranking among the seven CMOEAs. The p-values computed through the statistics of the Friedman aligned test are close to zero, which reveals significant differences among the seven algorithms.

Table 2 and Table 4 show the adjusted p-values of the IGD and HV results for the Friedman aligned test, with PPS-M2M as the control method. To compare the statistical differences between PPS-M2M and the other six algorithms, we perform a series of post-hoc tests. To control the family-wise error rate (FWER), a set of post-hoc procedures, including the Holm procedure Sture1979A, the Holland procedure holland1987improved, the Finner procedure Finner1993On, the Hochberg procedure hochberg1988sharper, the Hommel procedure hommel1988stagewise, the Rom procedure Rom1990A and the Li procedure Li2008A, are used according to Derrac2011A. Since each adjusted p-value in Table 2 and Table 4 is less than the preset significance level of 0.05, we can conclude that PPS-M2M is significantly better than the other six algorithms.
Test Instance  PPS-M2M  M2M  Epsilon  CDP  SR  C-MOEA/D  NSGA-II-CDP  
LIRCMOP1  mean  2.341E02  3.106E02  5.74E02  1.11E01  1.81E02  1.26E01  3.23E01 
std  8.369E03  9.888E03  2.89E02  5.04E02  1.66E02  7.03E02  7.33E02  
LIRCMOP2  mean  1.604E02  2.742E02  5.39E02  1.43E01  9.63E03  1.40E01  3.03E01 
std  7.606E03  8.596E03  2.13E02  5.55E02  7.23E03  5.44E02  7.24E02  
LIRCMOP3  mean  3.330E02  4.383E02  8.81E02  2.61E01  1.78E01  2.80E01  4.08E01 
std  1.274E02  1.445E02  4.36E02  4.33E02  7.20E02  4.21E02  1.15E01  
LIRCMOP4  mean  3.738E02  4.187E02  6.51E02  2.53E01  1.95E01  2.59E01  3.85E01 
std  1.421E02  2.371E02  3.01E02  4.34E02  6.40E02  3.51E02  1.35E01  
LIRCMOP5  mean  8.343E03  2.941E01  1.15E+00  1.05E+00  1.04E+00  1.10E+00  5.53E01 
std  1.750E03  5.013E01  1.98E01  3.63E01  3.66E01  2.99E01  6.88E01  
LIRCMOP6  mean  9.631E03  5.356E01  1.27E+00  1.09E+00  9.43E01  1.31E+00  5.74E01 
std  1.711E03  5.509E01  2.95E01  5.20E01  5.90E01  2.08E01  4.21E01  
LIRCMOP7  mean  9.335E03  5.237E01  1.51E+00  1.46E+00  1.08E+00  1.56E+00  2.38E01 
std  2.459E03  7.677E01  5.09E01  5.58E01  7.58E01  4.24E01  4.06E01  
LIRCMOP8  mean  9.351E03  7.924E01  1.62E+00  1.38E+00  1.01E+00  1.58E+00  6.02E01 
std  2.143E03  7.455E01  3.05E01  6.15E01  7.24E01  3.71E01  7.39E01  
LIRCMOP9  mean  2.886E01  4.599E01  4.90E01  4.81E01  4.85E01  4.81E01  6.44E01 
std  1.239E01  6.426E02  4.22E02  5.24E02  4.78E02  5.24E02  1.60E02  
LIRCMOP10  mean  1.894E02  2.291E01  2.13E01  2.16E01  1.92E01  2.13E01  5.97E01 
std  5.322E02  8.123E02  5.32E02  6.81E02  6.81E02  4.63E02  3.20E02  
LIRCMOP11  mean  1.194E02  4.400E01  3.47E01  3.42E01  3.16E01  3.81E01  4.87E01 
std  3.574E02  1.038E01  9.28E02  9.22E02  7.49E02  8.95E02  1.05E02  
LIRCMOP12  mean  8.071E02  1.488E01  2.52E01  2.69E01  2.06E01  2.50E+00  5.80E01 
std  6.031E02  5.757E02  8.98E02  9.06E02  5.61E02  9.63E02  1.17E01  
LIRCMOP13  mean  1.858E01  9.751E01  1.20E+00  1.21E+00  8.86E01  1.18E+00  1.39E+01 
std  3.009E02  5.292E01  3.06E01  3.17E01  5.76E01  3.78E01  2.26E+00  
LIRCMOP14  mean  1.759E01  9.429E01  1.02E+00  1.11E+00  1.03E+00  1.25E+00  1.36E+01 
std  3.257E02  5.152E01  4.86E01  3.98E01  4.70E01  5.30E02  2.17E+00  
Friedman Test  1.1429  2.9286  4.6786  4.8929  3.1429  5.5714  5.6429  
i  algorithm  unadjusted p  adjusted p values (seven post-hoc procedures)  

1  NSGAIICDP  0  0  0  0  0  0  0  0 
2  CMOEA/D  0  0  0  0  0  0  0  0 
3  CDP  0.000004  0.000017  0.000017  0.000017  0.000017  0.000017  0.000009  0.000005 
4  Epsilon  0.000015  0.000045  0.000045  0.000045  0.000045  0.000045  0.000022  0.000015 
5  SR  0.014306  0.028612  0.028612  0.028612  0.028407  0.028612  0.017142  0.014515 
6  M2M  0.028739  0.028739  0.028739  0.028739  0.028739  0.028739  0.028739  0.028739 
Test Instance  PPS-M2M  M2M  Epsilon  CDP  SR  C-MOEA/D  NSGA-II-CDP  
LIRCMOP1  mean  1.003E+00  9.897E01  9.590E01  7.540E01  9.960E01  7.410E01  5.160E01 
std  7.312E03  1.322E02  3.280E02  8.950E02  2.910E02  1.220E01  5.570E02  
LIRCMOP2  mean  1.334E+00  1.321E+00  1.280E+00  1.060E+00  1.340E+00  1.070E+00  8.240E01 
std  1.083E02  1.124E02  2.880E02  1.080E01  1.470E02  9.100E02  1.150E01  
LIRCMOP3  mean  8.499E01  8.407E01  7.980E01  4.860E01  5.910E01  4.710E01  4.080E01 
std  1.423E02  1.787E02  3.930E02  4.310E02  1.070E01  4.090E02  2.880E02  
LIRCMOP4  mean  1.059E+00  1.051E+00  1.020E+00  7.350E01  8.150E01  7.310E01  6.170E01 
std  1.831E02  3.870E02  4.190E02  5.440E02  8.700E02  5.160E02  1.060E01  
LIRCMOP5  mean  1.451E+00  1.084E+00  4.300E02  1.630E01  1.820E01  9.720E02  9.390E01 
std  4.778E03  6.097E01  2.350E01  4.430E01  4.390E01  3.700E01  3.210E01  
LIRCMOP6  mean  1.119E+00  5.337E01  5.400E02  1.880E01  3.020E01  2.330E02  4.130E01 
std  2.051E03  3.987E01  2.210E01  3.870E01  4.620E01  1.280E01  1.890E01  
LIRCMOP7  mean  3.002E+00  2.021E+00  3.030E01  3.740E01  9.880E01  2.040E01  2.400E+00 
std  8.630E03  1.349E+00  9.070E01  9.580E01  1.270E+00  7.520E01  6.520E01  
LIRCMOP8  mean  3.002E+00  1.498E+00  1.060E01  5.170E01  1.100E+00  1.660E01  1.900E+00 
std  5.415E03  1.277E+00  5.490E01  1.050E+00  1.200E+00  6.110E01  7.560E01  
LIRCMOP9  mean  3.287E+00  2.826E+00  2.740E+00  2.770E+00  2.750E+00  2.770E+00  2.060E+00 
std  1.926E01  2.114E01  1.480E01  1.840E01  1.640E01  1.840E01  1.080E02  
LIRCMOP10  mean  3.212E+00  2.845E+00  2.890E+00  2.880E+00  2.930E+00  2.890E+00  2.040E+00 
std  8.954E02  1.720E01  1.020E01  1.360E01  1.350E01  9.770E02  4.450E02  
LIRCMOP11  mean  4.359E+00  3.115E+00  3.340E+00  3.350E+00  3.380E+00  3.240E+00  3.110E+00 
std  1.167E01  3.003E01  2.570E01  2.570E01  2.900E01  2.550E01  1.540E02  
LIRCMOP12  mean  5.449E+00  5.221E+00  4.880E+00  4.830E+00  5.030E+00  4.890E+00  3.280E+00 
std  1.852E01  1.878E01  3.170E01  3.280E01  1.750E01  3.450E01  3.610E01  
LIRCMOP13  mean  4.856E+00  1.518E+00  4.550E01  4.630E01  1.890E+00  6.290E01  0.000E+00 
std  1.378E01  2.084E+00  1.300E+00  1.420E+00  2.570E+00  1.710E+00  0.000E+00  
LIRCMOP14  mean  5.425E+00  1.744E+00  1.330E+00  8.810E01  1.270E+00  1.800E01  0.000E+00 
std  1.240E01  2.314E+00  2.450E+00  1.970E+00  2.290E+00  2.600E01  0.000E+00  
Friedman Test  1.0714  2.9286  4.8214  4.8929  3.2143  5.3571  5.7143 
i  algorithm  unadjusted p  adjusted p values (seven post-hoc procedures)  

1  NSGAIICDP  0  0  0  0  0  0  0  0 
2  CMOEA/D  0  0.000001  0.000001  0.000001  0.000001  0.000001  0  0 
3  CDP  0.000003  0.000011  0.000011  0.000009  0.000011  0.000011  0.000006  0.000003 
4  Epsilon  0.000004  0.000013  0.000013  0.000013  0.000013  0.000013  0.000007  0.000004 
5  SR  0.008679  0.017358  0.017358  0.017358  0.017282  0.017358  0.010406  0.008804 
6  M2M  0.022934  0.022934  0.022934  0.022934  0.022934  0.022934  0.022934  0.022934 
To further discuss the advantages of the proposed PPS-M2M in solving CMOPs, we plot the non-dominated solutions achieved by each algorithm on LIR-CMOP2, LIR-CMOP7 and LIR-CMOP11 in the runs with the median HV values. The feasible and infeasible regions of LIR-CMOP2, LIR-CMOP7 and LIR-CMOP11, corresponding to three different types of difficulties fan2019dascmop, are plotted in Fig. 3.

As shown in Fig. 3(a), the feasible region of LIR-CMOP2 is very small, which makes it a feasibility-hard problem. The non-dominated solutions achieved by each algorithm on LIR-CMOP2 in the runs with the median HV values are plotted in Fig. 4. We can observe that the proposed PPS-M2M converges to the true PF with good diversity, as shown in Fig. 4(a). The convergence of M2M is worse than that of PPS-M2M, as shown in Fig. 4(b). For the remaining five algorithms, the diversity is worse than that of PPS-M2M, as shown in Fig. 4(c)-(g).

In LIR-CMOP7, there are three large non-overlapping infeasible regions in front of the unconstrained PF, as illustrated in Fig. 3(b). In addition, the unconstrained PF is covered by one of the infeasible regions, which makes it a convergence-hard problem. The results of PPS-M2M and the other six algorithms (M2M, MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP, C-MOEA/D and NSGA-II-CDP) on LIR-CMOP7 are shown in Fig. 5. We can see that only PPS-M2M and M2M get across the infeasible regions to reach the true PF, as illustrated in Fig. 5(a)-(b), while the other five algorithms cannot converge to the true PF, as illustrated by Fig. 5(c)-(g). A possible reason is that two large infeasible regions in front of the PF hinder the populations of these five algorithms on their way toward the constrained PF.

The PF of LIR-CMOP11 is discrete, as shown in Fig. 3(c), which makes it a problem with diversity-hardness. There are seven Pareto optimal solutions: two are located on the unconstrained PF, and five are located on the constrained PF. Fig. 6 shows the results of the seven tested algorithms on LIR-CMOP11. From Fig. 6(a)-(b), we can see that only PPS-M2M and M2M find all the Pareto optimal solutions. Furthermore, PPS-M2M has better convergence than M2M. A possible reason is that the subpopulations are merged into a whole population, which is evolved by employing the improved epsilon constraint-handling method and the $\epsilon$-dominance technique during the last 10% of the maximum generations. The other five algorithms (MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP, C-MOEA/D and NSGA-II-CDP) find only a part of the Pareto optimal solutions, because infeasible regions block their populations on the way toward the true PF.

Based on the above observations and analysis, we can conclude that the proposed PPS-M2M outperforms the other six CMOEAs on most of the test instances.
Fig. 3: (a) LIR-CMOP2, (b) LIR-CMOP7, (c) LIR-CMOP11.
Fig. 4 (LIR-CMOP2): (a) PPS-M2M, (b) M2M, (c) MOEA/D-Epsilon, (d) MOEA/D-CDP, (e) MOEA/D-SR, (f) C-MOEA/D, (g) NSGA-II-CDP.
Fig. 5 (LIR-CMOP7): (a) PPS-M2M, (b) M2M, (c) MOEA/D-Epsilon, (d) MOEA/D-CDP, (e) MOEA/D-SR, (f) C-MOEA/D, (g) NSGA-II-CDP.
Fig. 6 (LIR-CMOP11): (a) PPS-M2M, (b) M2M, (c) MOEA/D-Epsilon, (d) MOEA/D-CDP, (e) MOEA/D-SR, (f) C-MOEA/D, (g) NSGA-II-CDP.
5 Conclusion
This paper proposes a new algorithm (PPS-M2M) that combines a multi-objective to multi-objective (M2M) decomposition approach with a push and pull search (PPS) framework to deal with CMOPs. More specifically, the search process of PPS-M2M is divided into two stages, namely the push and the pull search processes. In the push search stage, PPS-M2M uses the M2M decomposition method to decompose a CMOP into a set of simple CMOPs corresponding to a set of subpopulations. Each simple CMOP is solved in a collaborative manner without considering any constraints, which helps the subpopulations get across infeasible regions effortlessly. Furthermore, some constrained landscape information can be estimated during the push search stage, such as the ratio of feasible to infeasible solutions and the maximum overall constraint violation, which can be further employed to guide the parameter settings of the constraint-handling mechanisms in the pull search stage. When the maximum rate of change between the ideal and nadir points is less than or equal to a predefined threshold, the search process is switched to the pull search stage. At the beginning of the pull search stage, the infeasible solutions of each subpopulation achieved in the push stage are pulled toward the feasible and non-dominated regions by adopting an improved epsilon constraint-handling approach. During the last 10% of the maximum generations, the subpopulations are combined into a whole population, which is evolved by employing the improved epsilon constraint-handling method and the $\epsilon$-dominance technique. The comprehensive experimental results demonstrate that the proposed PPS-M2M significantly outperforms the other six CMOEAs on most of the benchmark problems.
It is also worth noting that there are few studies on using the M2M decomposition method to solve CMOPs; the proposed PPS-M2M provides a feasible approach. Naturally, further improving the performance of PPS-M2M will require additional work, such as enhanced constraint-handling mechanisms in the pull search stage and the integration of machine learning methods to dynamically allocate computational resources among the subpopulations. In addition, other benchmark problems (e.g., CF1–CF10 zhang2008multiobjective ) and real-world optimization problems will be investigated with the proposed PPS-M2M.
Acknowledgement
This work was supported in part by the National Natural Science Foundation of China under Grants 61175073, 61300159, 61332002, and 51375287, the Guangdong Key Laboratory of Digital Signal and Image Processing, the Science and Technology Planning Project of Guangdong Province (2013B011304002), and the Project of Educational Commission of Guangdong Province, China (2015KGJHZ014).
References
 (1) K. Deb, Multi-objective optimization using evolutionary algorithms, John Wiley & Sons, 2001.
 (2) K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation 6 (2) (2002) 182–197.
 (3) X. Cai, Z. Mei, Z. Fan, Q. Zhang, A constrained decomposition approach with grids for evolutionary multiobjective optimization, IEEE Transactions on Evolutionary Computation 22 (4) (2018) 564–577.
 (4) X. Cai, Z. Mei, Z. Fan, A decomposition-based many-objective evolutionary algorithm with two types of adjustments for direction vectors, IEEE Transactions on Cybernetics 48 (8) (2018) 2335–2348.
 (5) Z. Fan, Y. Fang, W. Li, X. Cai, C. Wei, E. Goodman, MOEA/D with angle-based constrained dominance principle for constrained multiobjective optimization problems, Applied Soft Computing 74 (2019) 621–633.
 (6) Z. Fan, W. Li, X. Cai, H. Huang, Y. Fang, Y. You, J. Mo, C. Wei, E. Goodman, An improved epsilon constraint-handling method in MOEA/D for CMOPs with large infeasible regions, Soft Computing (2017) 1–20.
 (7) Y. Wang, J.P. Li, X. Xue, B.C. Wang, Utilizing the correlation between constraints and objective function for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation.
 (8) Z.Z. Liu, Y. Wang, Handling constrained multiobjective optimization problems with constraints in both the decision and objective spaces, IEEE Transactions on Evolutionary Computation.
 (9) G. Wang, X. Cai, Z. Cui, G. Min, J. Chen, High performance computing for cyber physical social systems by using evolutionary multiobjective optimization algorithm, IEEE Transactions on Emerging Topics in Computing (2018) 1–1, doi:10.1109/TETC.2017.2703784.
 (10) X. Cai, Y. Li, Z. Fan, Q. Zhang, An external archive guided multiobjective evolutionary algorithm based on decomposition for combinatorial optimization, IEEE Transactions on Evolutionary Computation 19 (4) (2015) 508–523.
 (11) K. Deb, R. Datta, A fast and accurate solution of constrained optimization problems using a hybrid bi-objective and penalty function approach, in: IEEE Congress on Evolutionary Computation, IEEE, 2010, pp. 1–8.
 (12) K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2–4) (2000) 311–338.
 (13) T. P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284–294.
 (14) T. Takahama, S. Sakai, Constrained optimization by the constrained differential evolution with an archive and gradient-based mutation, in: 2010 IEEE Congress on Evolutionary Computation (CEC), IEEE, 2010, pp. 1–9.
 (15) Y. Wang, Z. Cai, G. Guo, Y. Zhou, Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 37 (3) (2007) 560–575.
 (16) A. Homaifar, C. X. Qi, S. H. Lai, Constrained optimization via genetic algorithms, Simulation 62 (4) (1994) 242–253.
 (17) J. A. Joines, C. R. Houck, On the use of nonstationary penalty functions to solve nonlinear constrained optimization problems with GA’s, in: Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence, IEEE, 1994, pp. 579–584.
 (18) G.-G. Wang, Y. Tan, Improving metaheuristic algorithms with information feedback models, IEEE Transactions on Cybernetics 49 (2) (2019) 542–555.
 (19) Y. G. Woldesenbet, G. G. Yen, B. G. Tessema, Constraint handling in multiobjective evolutionary optimization, IEEE Transactions on Evolutionary Computation 13 (3) (2009) 514–525.
 (20) E. MezuraMontes, J. VelázquezReyes, C. C. Coello, Modified differential evolution for constrained optimization, in: 2006 IEEE International Conference on Evolutionary Computation, IEEE, 2006, pp. 25–32.
 (21) E. MezuraMontes, A. G. PalomequeOrtiz, Parameter control in differential evolution for constrained optimization, in: 2009 IEEE Congress on Evolutionary Computation, IEEE, 2009, pp. 1375–1382.
 (22) Z. Fan, J. Liu, T. Sorensen, P. Wang, Improved differential evolution based on stochastic ranking for robust layout synthesis of MEMS components, IEEE Transactions on Industrial Electronics 56 (4) (2009) 937–948.
 (23) G. Leguizamón, C. A. C. Coello, A boundary search based ACO algorithm coupled with stochastic ranking, in: 2007 IEEE Congress on Evolutionary Computation, IEEE, 2007, pp. 165–172.
 (24) H.L. Liu, F. Gu, Q. Zhang, Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems, IEEE Transactions on Evolutionary Computation 18 (3) (2014) 450–455.
 (25) Z. Fan, W. Li, X. Cai, H. Li, C. Wei, Q. Zhang, K. Deb, E. Goodman, Push and pull search for solving constrained multiobjective optimization problems, Swarm and Evolutionary Computation 44 (2019) 665–679.
 (26) M. Laumanns, L. Thiele, K. Deb, E. Zitzler, Combining convergence and diversity in evolutionary multiobjective optimization, Evolutionary Computation 10 (3) (2002) 263–282.
 (27) Z. Yang, X. Cai, Z. Fan, Epsilon constrained method for constrained multiobjective optimization problems: some preliminary results, in: Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, ACM, 2014, pp. 1181–1186.
 (28) M. A. Jan, R. A. Khanum, A study of two penalty-parameterless constraint handling techniques in the framework of MOEA/D, Applied Soft Computing 13 (1) (2013) 128–148.
 (29) M. Asafuddoula, T. Ray, R. Sarker, K. Alam, An adaptive constraint handling approach embedded MOEA/D, in: 2012 IEEE Congress on Evolutionary Computation, IEEE, 2012, pp. 1–8.
 (30) R. M. Rizk-Allah, R. A. El-Sehiemy, G.-G. Wang, A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution, Applied Soft Computing 63 (2018) 206–222.
 (31) T. Takahama, S. Sakai, Constrained optimization by the constrained differential evolution with gradient-based mutation and feasible elites, in: 2006 IEEE International Conference on Evolutionary Computation, IEEE, 2006, pp. 1–8.
 (32) P. A. Bosman, D. Thierens, The balance between proximity and diversity in multiobjective evolutionary algorithms, IEEE Transactions on Evolutionary Computation 7 (2) (2003) 174–188.
 (33) E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach, IEEE Transactions on Evolutionary Computation 3 (4) (1999) 257–271.
 (34) S. Holm, A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics 6 (2) (1979) 65–70.
 (35) B. S. Holland, M. D. Copenhaver, An improved sequentially rejective Bonferroni test procedure, Biometrics (1987) 417–423.
 (36) H. Finner, On a monotonicity problem in step-down multiple test procedures, Journal of the American Statistical Association 88 (423) (1993) 920–923.
 (37) Y. Hochberg, A sharper Bonferroni procedure for multiple tests of significance, Biometrika 75 (4) (1988) 800–802.
 (38) G. Hommel, A stagewise rejective multiple test procedure based on a modified Bonferroni test, Biometrika 75 (2) (1988) 383–386.
 (39) D. M. Rom, A sequentially rejective test procedure based on a modified Bonferroni inequality, Biometrika 77 (3) (1990) 663–665.
 (40) J. Li, A twostep rejection procedure for testing multiple hypotheses, Journal of Statistical Planning and Inference 138 (6) (2008) 1521–1527.
 (41) J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm and Evolutionary Computation 1 (1) (2011) 3–18.
 (42) Z. Fan, W. Li, X. Cai, H. Li, C. Wei, Q. Zhang, K. Deb, E. Goodman, Difficulty adjustable and scalable constrained multiobjective test problem toolkit, Evolutionary Computation (2019) accepted.
 (43) Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, S. Tiwari, Multiobjective optimization test instances for the CEC 2009 special session and competition, University of Essex, Colchester, UK and Nanyang Technological University, Singapore, special session on performance assessment of multiobjective optimization algorithms, technical report 264.