 # Push and Pull Search Embedded in an M2M Framework for Solving Constrained Multi-objective Optimization Problems

In dealing with constrained multi-objective optimization problems (CMOPs), a key issue of multi-objective evolutionary algorithms (MOEAs) is to balance the convergence and diversity of working populations.


## 1 Introduction

Many real-world optimization problems can be formulated as constrained multi-objective optimization problems (CMOPs), which can be defined as follows kalyanmoy2001multi :

$$
\begin{cases}
\text{minimize} & F(x) = (f_1(x), \ldots, f_m(x))^T \\
\text{subject to} & g_i(x) \ge 0, \quad i = 1, \ldots, q \\
& h_j(x) = 0, \quad j = 1, \ldots, p \\
& x \in \mathbb{R}^n
\end{cases} \tag{1}
$$

where $F(x)$ is an $m$-dimensional objective vector, and $m$ is the number of objectives. $g_i(x) \ge 0$ is an inequality constraint, and $q$ is the number of inequality constraints. $h_j(x) = 0$ is an equality constraint, and $p$ represents the number of equality constraints. $x$ is an $n$-dimensional decision vector.

In order to solve CMOPs with equality constraints, the equality constraints are often transformed into inequality constraints by using an extremely small positive number $\delta$, as follows:

$$h_j(x)' \equiv \delta - |h_j(x)| \ge 0 \tag{2}$$

To deal with a set of constraints in CMOPs, an overall constraint violation $\phi(x)$ is employed, as follows:

$$\phi(x) = \sum_{i=1}^{q} |\min(g_i(x), 0)| + \sum_{j=1}^{p} |\min(h_j(x)', 0)| \tag{3}$$

Given a solution $x$, if $\phi(x) = 0$, $x$ is a feasible solution; otherwise, it is infeasible. All feasible solutions form the feasible solution set.
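As a concrete illustration, the transformation in Eq. (2) and the overall violation in Eq. (3) can be sketched as follows; the function names and the default value of `delta` are illustrative, not settings taken from the paper.

```python
def transform_equality(h_vals, delta=1e-4):
    """Eq. (2): convert equality constraints h_j(x) = 0 into
    inequality form h'_j(x) = delta - |h_j(x)| >= 0."""
    return [delta - abs(h) for h in h_vals]

def overall_violation(g_vals, h_vals, delta=1e-4):
    """Eq. (3): summed violation of inequality constraints g_i(x) >= 0
    and transformed equality constraints; 0 means feasible."""
    h_prime = transform_equality(h_vals, delta)
    phi = sum(abs(min(g, 0.0)) for g in g_vals)
    phi += sum(abs(min(h, 0.0)) for h in h_prime)
    return phi
```

A solution with `overall_violation(...) == 0` is feasible in the sense used throughout the paper.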

To deal with constraints, Deb et al. 996017 defined a constrained-dominance principle (CDP): a solution $x^1$ is said to constrained-dominate a solution $x^2$ if one of the following conditions is met:

1. Solution $x^1$ is feasible and solution $x^2$ is infeasible.

2. Both $x^1$ and $x^2$ are infeasible, and the overall constraint violation of $x^1$ is smaller than that of $x^2$.

3. Both $x^1$ and $x^2$ are feasible, and $x^1$ dominates $x^2$ in terms of the objectives.

Given a feasible solution $x^*$, if there is no other solution $x$ dominating it under CDP, $x^*$ is called a Pareto optimal solution. All Pareto optimal solutions constitute the Pareto set (PS). The set of the mapped vectors of the PS in the objective space is called the Pareto front (PF), which is defined as $PF = \{F(x) \mid x \in PS\}$.
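The three CDP rules above can be expressed as a small comparison routine; a minimal sketch, where `phi_a`/`phi_b` are the overall constraint violations of Eq. (3), `f_a`/`f_b` are objective vectors, and minimization is assumed:

```python
def constrained_dominates(phi_a, f_a, phi_b, f_b):
    """Constrained-dominance principle (CDP): True if solution a
    constrained-dominates solution b under one of the three rules."""
    feas_a, feas_b = phi_a == 0, phi_b == 0
    if feas_a and not feas_b:       # rule 1: feasible beats infeasible
        return True
    if not feas_a and not feas_b:   # rule 2: smaller violation wins
        return phi_a < phi_b
    if feas_a and feas_b:           # rule 3: Pareto dominance on objectives
        return (all(x <= y for x, y in zip(f_a, f_b))
                and any(x < y for x, y in zip(f_a, f_b)))
    return False                    # a infeasible, b feasible
```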

Over the past decade, a lot of research has been done in the field of multi-objective evolutionary algorithms (MOEAs) cai2018constrained cai2018decomposition . However, comparatively few works address CMOPs. Recently, several constrained multi-objective evolutionary algorithms (CMOEAs) Fan2016Angle fan2017improved wang2019utilizing liu2019handling have been proposed to solve CMOPs. CMOEAs are particularly suitable for solving CMOPs because they can find a number of Pareto optimal solutions in a single run and are not affected by the mathematical properties of the objective functions. Therefore, the use of evolutionary algorithms to solve multi-objective optimization problems 7927724 cai2015external has become a research hot spot in recent years.

To better balance minimizing the objectives and satisfying the constraints of CMOPs, many constraint-handling mechanisms have been designed, such as the penalty function method 5586543 , CDP deb2000efficient , stochastic ranking runarsson2000stochastic , the $\epsilon$-constrained method takahama2010constrained , and multi-objective concepts wang2007multiobjective . For example, penalty function methods transform a constrained optimization problem into an unconstrained one by adding the constraint violations, multiplied by penalty factors, to the objectives. If the penalty factors remain constant throughout the optimization process, it is called a static penalty method homaifar1994constrained . If the penalty factors are constructed as a function of the iteration counter or the iteration time, it is called a dynamic penalty approach joines1994use . If the penalty factors change according to feedback information wang2017improving collected during the search, it is called an adaptive penalty approach woldesenbet2009constraint .

Deb proposed a constraint-handling method called CDP deb2000efficient , in which the fitness of a feasible solution is always better than that of an infeasible one. Subsequently, CDP was extended to differential evolution (DE) by Mezura-Montes et al. mezura2006modified to select among target vectors and trial vectors. Moreover, CDP was used to design parameter control in DE for constrained optimization mezura2009parameter .

In order to overcome the weaknesses of penalty-based constraint-handling methods, stochastic ranking was proposed runarsson2000stochastic , which uses a bubble-sort-like procedure to rank solutions of constrained optimization problems. Stochastic ranking uses a probability parameter $p_f$ to determine whether a pairwise comparison is based on the objectives or on the constraints. In the case of $p_f = 0$, stochastic ranking behaves like the feasibility rules. Furthermore, it can be coupled with various algorithms; for example, stochastic ranking has been combined with DE fan2009improved and ant colony optimization leguizamon2007boundary .

In this paper, we propose a new approach, namely PPS-M2M, to solve CMOPs, which combines an M2M approach Liu2014Decomposition with the push and pull search (PPS) framework fan2018push . Unlike other constraint-handling mechanisms, PPS-M2M decomposes a CMOP into a number of simple constrained multi-objective optimization sub-problems in the initial phase. Each sub-problem corresponds to a sub-population. During the search process, the sub-populations evolve in a collaborative manner to ensure the diversity of the population. Inspired by the idea of the information feedback model, some information about the constrained landscape is collected to help set the parameters of the constraint-handling mechanisms.

In addition, PPS-M2M divides the search process into two different phases. In the first phase, each sub-population approaches the unconstrained PF as closely as possible without considering any constraints. In the second phase, each sub-population is pulled back to the constrained PF using constraint-handling approaches. The pull search stage is divided into two parts: (1) for the first 90% of its generations, only an improved epsilon constraint-handling mechanism fan2017improved is used to optimize each subproblem; (2) in the last 10% of the generations, all sub-populations are merged into one population, and the $\epsilon$-dominance laumanns2002combining and the improved epsilon constraint-handling mechanism fan2017improved work together to evolve the population. In summary, the proposed PPS-M2M has the following advantages.

1. At the beginning of the search, the method decomposes the population into a set of sub-populations, and each sub-population searches a different multi-objective sub-problem in a coordinated manner. In other words, the PFs of all these subproblems constitute the PF of the CMOP. The computational complexity is thus reduced by restricting the operators to each sub-population, and the convergence and diversity of the population are effectively ensured.

2. When dealing with constraints, the method follows a procedure of "ignore the constraints first and consider the constraints second", so that infeasible regions lying in front of the true PF present no extra barriers to the working population.

3. Since the landscape of constraints has been probed in the unconstrained push searching stage, this information can be employed to guide the parameter settings for mechanisms of constraint handling in the pull search stage.

The remainder of this paper is organized as follows. Section 2 introduces the general idea of PPS framework and the M2M decomposition approach. Section 3 gives an instantiation of PPS-M2M. Section 4 designs a set of experiments to compare the proposed PPS-M2M with the other six CMOEAs, including M2M Liu2014Decomposition , MOEA/D-Epsilon Yang:2014vt , MOEA/D-SR jan2013study , MOEA/D-CDP jan2013study , C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017 . Finally, conclusions are drawn in section 5.

## 2 Related Work

In this section, we introduce the general idea of PPS framework and the M2M population decomposition approach, which are used in the rest of this paper.

### 2.1 The general idea of PPS framework

The push and pull search (PPS) framework was introduced by Fan et al. fan2018push . Unlike other constraint-handling mechanisms, the search process of PPS is divided into two different stages, the push search and the pull search, and follows a procedure of "push first and pull second": the working population is pushed toward the unconstrained PF without considering any constraints in the push stage, and a constraint-handling mechanism is then used to pull the working population to the constrained PF in the pull stage.

In order to switch from the push search stage to the pull search stage, the following condition is applied:

$$r_k \equiv \max\{r_z^k, r_n^k\} \le \epsilon \tag{4}$$

where $\epsilon$ is a threshold defined by the user; in this work, $\epsilon$ is set to a fixed value. In Eq. (4), $r_z^k$ is the rate of change of the ideal point over the last $l$ generations according to Eq. (5), and $r_n^k$ is the rate of change of the nadir point according to Eq. (6).

$$r_z^k = \max_{i=1,\ldots,m}\left\{\frac{|z_i^k - z_i^{k-l}|}{\max\{|z_i^{k-l}|, \Delta\}}\right\} \tag{5}$$
$$r_n^k = \max_{i=1,\ldots,m}\left\{\frac{|n_i^k - n_i^{k-l}|}{\max\{|n_i^{k-l}|, \Delta\}}\right\} \tag{6}$$

where $z^k$ and $n^k$ are the ideal and nadir points in the $k$-th generation, and $z^{k-l}$ and $n^{k-l}$ are the ideal and nadir points in the $(k-l)$-th generation. $\Delta$ is a very small positive number, which is used to make sure that the denominators in Eq. (5) and Eq. (6) are not equal to zero; in this paper, $\Delta$ is set to a small constant. When $r_k$ is less than or equal to $\epsilon$, the push search stage is completed and the pull search stage is ready to start.
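The switching condition of Eqs. (4)-(6) can be sketched as follows; the default values of `eps` and `delta` are illustrative placeholders, not the paper's settings:

```python
def rate_of_change(point_k, point_k_minus_l, delta=1e-6):
    """Eqs. (5)-(6): max relative change per objective of an ideal or
    nadir point between generation k-l and generation k."""
    return max(abs(a - b) / max(abs(b), delta)
               for a, b in zip(point_k, point_k_minus_l))

def switch_to_pull(z_k, z_kl, n_k, n_kl, eps=1e-3, delta=1e-6):
    """Eq. (4): switch to the pull stage once both the ideal point z
    and the nadir point n have stabilized over the last l generations."""
    rk = max(rate_of_change(z_k, z_kl, delta),
             rate_of_change(n_k, n_kl, delta))
    return rk <= eps
```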

The major advantages of the PPS framework include:

1. In the push stage, the working population conveniently gets across infeasible regions without considering any constraints, which avoids the impact of infeasible regions lying in front of the true PF.

2. Since the landscape of constraints has already been estimated in the push search stage, valuable information can be collected to guide the parameter settings in the pull search stage, which not only facilitates the parameter settings of the algorithm, but also enhances its adaptability when dealing with CMOPs.

### 2.2 The M2M population decomposition approach

Employing a number of sub-populations to solve problems in a collaborative way rizk2018novel is a widely used approach, which can help an algorithm balance its convergence and diversity. One of the most popular methods is the M2M population decomposition approach Liu2014Decomposition , which decomposes a multi-objective optimization problem into a number of simple multi-objective optimization subproblems at initialization, and then solves these sub-problems simultaneously in a coordinated manner. For this purpose, $K$ unit vectors $v^1, \ldots, v^K$ are chosen in the first octant of the objective space $\mathbb{R}^m_+$. Then $\mathbb{R}^m_+$ is divided into $K$ subregions $\Omega_1, \ldots, \Omega_K$, where $\Omega_k$ is

$$\Omega_k = \{u \in \mathbb{R}^m_+ \mid \langle u, v^k \rangle \le \langle u, v^j \rangle \text{ for any } j = 1, \ldots, K\} \tag{7}$$

where $\langle u, v \rangle$ is the acute angle between $u$ and $v$. Therefore, the population is decomposed into $K$ sub-populations, and each sub-population searches a different multi-objective subproblem. Subproblem $k$ is defined as:

$$\begin{cases} \text{minimize} & F(x) = (f_1(x), \ldots, f_m(x)) \\ \text{subject to} & x \in \prod_{i=1}^{n} [a_i, b_i] \\ & F(x) \in \Omega_k \end{cases} \tag{8}$$

Altogether there are $K$ subproblems, and each subproblem is solved by a sub-population $P_k$. Moreover, each sub-population has $S$ individuals. In order to keep $S$ individuals for each subproblem, some selection strategies are used. If sub-population $P_k$ has fewer than $S$ individuals, then individuals from the entire population are randomly selected and added to $P_k$.
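A minimal sketch of the sub-region assignment in Eq. (7), which places an objective vector `u` in the sub-region whose direction vector forms the smallest acute angle with it; the direction vectors in the test are hypothetical examples:

```python
import math

def acute_angle(u, v):
    """Angle (radians) between objective vector u and direction vector v."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    # clamp to [-1, 1] to guard against floating-point drift
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def assign_subregion(u, directions):
    """Eq. (7): index k of the sub-region whose direction vector
    makes the smallest acute angle with u."""
    return min(range(len(directions)),
               key=lambda k: acute_angle(u, directions[k]))
```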

A major advantage of M2M is that it can effectively balance diversity and convergence at each generation by decomposing a multi-objective optimization problem into multiple simple multi-objective optimization problems.

## 3 An Instantiation of PPS-M2M

This section describes the details of an instantiation, which combines the M2M approach with the PPS framework in a non-dominated sorting framework to solve CMOPs.

### 3.1 The M2M approach

In the initial stage, PPS-M2M uses a decomposition strategy to decompose a CMOP into a set of sub-problems that are solved in a collaborative manner, with each sub-problem corresponding to a sub-population. For the sake of simplicity, we assume that all the objective functions of a CMOP are non-negative. Otherwise, each objective function $f_i(x)$ is replaced by $f_i(x) - z_i^{\min}$, where $z_i^{\min}$ is the minimum value of the $i$-th objective function found so far.

Assuming that the objective function space is divided into $K$ sub-regions, $K$ direction vectors $v^1, \ldots, v^K$ are uniformly distributed on the first octant of the unit sphere, where $v^k$ is the center of the $k$-th subregion. Then the objective function space is divided into $K$ non-adjacent sub-regions $\Omega_1, \ldots, \Omega_K$, where the $k$-th sub-region is obtained by Eq. (7). Through such a decomposition, the constrained multi-objective optimization problem (Eq. (1)) is decomposed into $K$ simple CMOPs. The procedure is described in Algorithm 1. As an example, when the objective number $m = 2$ and the number of sub-regions $K = 6$, the population decomposition is illustrated in Fig. 1, where $v^1, \ldots, v^6$ are six evenly distributed direction vectors.

### 3.2 The PPS framework

The search process of the PPS framework is divided into two main search stages: the push search stage and the pull search stage. In the push search stage, each sub-population uses an unconstrained NSGA-II to search for non-dominated solutions without considering any constraints. When using unconstrained NSGA-II to solve simple multi-objective optimization problems, an individual $i$ is measured by two attributes 996017 : the non-domination rank $i_{rank}$ and the crowding distance $i_{dist}$. Since individuals have these two attributes, when an individual $i$ is compared to another individual $j$, individual $i$ enters the descendant population if one of the following two requirements is satisfied.

1. Individual $i$ has a lower (better) non-domination rank than individual $j$, i.e., $i_{rank} < j_{rank}$;

2. Individuals $i$ and $j$ have the same rank and individual $i$ has the larger crowding distance, i.e., $i_{rank} = j_{rank}$ and $i_{dist} > j_{dist}$.

In other words, when two individuals belong to different non-domination ranks, the individual with the lower (better) rank is selected; when the two individuals have the same rank, the one with the larger crowding distance is selected.
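The two selection rules above correspond to NSGA-II's crowded-comparison operator, which can be sketched as:

```python
def nsga2_prefer(rank_a, dist_a, rank_b, dist_b):
    """Crowded-comparison operator: a lower non-domination rank wins;
    on a rank tie, the larger crowding distance wins."""
    if rank_a != rank_b:
        return rank_a < rank_b
    return dist_a > dist_b
```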

In the push search stage, a constrained optimization problem is optimized without considering any constraints. The pseudocode of the push search is given in Algorithm 2. In line 2, non-dominated sorting is carried out on the sub-population. In lines 3-6, solutions are selected into the next sub-population until its size exceeds the required number. Lines 7-9 then select the final set of solutions. In line 10, the sub-population is updated.

In the pull search stage, the constrained optimization problem is optimized by considering constraints, which pulls the population to the feasible and non-dominated regions. The pseudocode of the pull search is given in Algorithm 3.

The mechanism described in Eq. (4) controls the switch of the search process from the push to the pull search. At the beginning of the evolutionary process, the value of $r_k$ is initialized to 1.0 in order to ensure a thorough search in the push stage. The value of $r_k$ is then updated by Eq. (4). When the value of $r_k$ is less than or equal to the preset threshold $\epsilon$, the search behavior changes.

In the pull stage, we need to prevent the population from falling into local optima and to balance the evolutionary search between feasible and infeasible regions. To achieve these goals, an improved epsilon constraint-handling mechanism fan2017improved and the $\epsilon$-dominance laumanns2002combining are used to deal with constraints and objectives in the pull search stage, with the detailed formula given as follows.

An improved epsilon constraint-handling mechanism is defined as follows:

$$\epsilon(k) = \begin{cases} \phi(x^\theta) & \text{if } k = 0 \\ (1-\tau)\,\epsilon(k-1) & \text{if } r_f^k < \alpha \text{ and } 0 < k < T_c \\ \epsilon(0)\left(1 - \dfrac{k}{T_c}\right)^{cp} & \text{if } r_f^k \ge \alpha \text{ and } 0 < k < T_c \\ 0 & \text{otherwise} \end{cases} \tag{9}$$

where $\epsilon(k)$ is the value of the $\epsilon$ function at generation $k$, $\phi(x^\theta)$ is the overall constraint violation of the top $\theta$-th individual in the initial population, and $r_f^k$ is the proportion of feasible solutions in generation $k$. $\tau$ controls the speed at which the relaxation of constraints is reduced in the case of $r_f^k < \alpha$. $\alpha$ controls the search preference between the feasible and infeasible regions. $cp$ is used to control the reduction of the relaxation of constraints in the case of $r_f^k \ge \alpha$. $\epsilon(k)$ stops updating when the generation counter $k$ reaches the control generation $T_c$. $\epsilon(0)$ is set to the maximum overall constraint violation when the push search finishes. In the case of $r_f^k < \alpha$, Eq. (9) decreases $\epsilon(k)$ at an exponential rate, which has the potential to find feasible solutions more efficiently than the setting in takahama2006constrained . In the case of $r_f^k \ge \alpha$, Eq. (9) adopts the same setting as takahama2006constrained . In the pull search stage, a new individual is selected using the constraint-handling mechanism described in Algorithm 3.
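A sketch of the $\epsilon$-level schedule in Eq. (9); the default values of `tau`, `alpha` and `cp` are illustrative, not the paper's tuned settings, and `eps0` stands for the initial level $\epsilon(0)$:

```python
def epsilon_level(k, eps_prev, eps0, rf_k, Tc, tau=0.1, alpha=0.95, cp=2.0):
    """Improved epsilon-level update, Eq. (9).

    k: generation counter; eps_prev: epsilon(k-1); eps0: epsilon(0);
    rf_k: feasibility ratio in generation k; Tc: control generation."""
    if k == 0:
        return eps0                       # initial violation level
    if k >= Tc:
        return 0.0                        # relaxation fully removed
    if rf_k < alpha:                      # few feasible solutions:
        return (1.0 - tau) * eps_prev     # shrink exponentially
    return eps0 * (1.0 - k / Tc) ** cp    # enough feasible solutions
```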

### 3.3 PPS Embedded in M2M

Figure 2: Infeasible regions make the original unconstrained PF partially feasible. The objective space is divided into 5 sub-regions: 5 direction vectors $v^1, \ldots, v^5$ are uniformly distributed on the first octant of the unit sphere, yielding 5 non-adjacent sub-regions $M_1, \ldots, M_5$. (a)-(c) show the push search process, in which the working population of each sub-region crosses infeasible regions without any barriers. (d)-(f) show the pull search process, in which the infeasible solutions in the working population of each sub-region are pulled to the feasible and non-dominated regions.

The pseudo-code of PPS-M2M is introduced in Algorithm 4. The algorithm is initialized at lines 1-3. At line 2, the population of a CMOP is divided into $K$ sub-populations, and the number of individuals in each sub-population is equal to $S$. At line 3, the maximum rate of change of the ideal and nadir points $r_k$ is initialized to 1.0, and the flag of the search stage is set to push. The algorithm runs repeatedly from line 4 to 38 until the termination condition is met. Lines 5-13 describe the process of generating new solutions for each sub-population. A number of new solutions are generated at lines 6-10. At lines 12-13, the solution set is allocated to the sub-populations according to Eq. (7). The maximum rate of change between the ideal and nadir points during the last $l$ generations is calculated at line 15. The parameter $\epsilon(k)$ is updated at lines 16-25. The updating process for each sub-population is described in lines 26-36. If the size of a sub-population is less than $S$, individuals are randomly selected from the whole population and added to it. If the size of a sub-population is greater than $S$, solutions are selected by the PPS framework. More specifically, at the push search stage, solutions are selected by employing the non-dominated sorting method without considering any constraints, as illustrated in line 31. At the pull search stage, solutions are selected by using the improved epsilon constraint-handling approach, as illustrated in line 33. The generation counter is updated at line 37. At line 39, a set of non-dominated and feasible solutions is selected as the output.

As an example, the search behavior of PPS-M2M is illustrated in Fig. 2 and can be summarized as follows. First, five direction vectors are uniformly sampled in the objective space, from which five non-adjacent sub-regions are constructed. The working sub-populations reach the unconstrained PF without considering any constraints in the push search, as illustrated in Fig. 2(a)-(c). It is notable that in this particular case some solutions located on the unconstrained PF are feasible. In the pull search stage, the infeasible solutions are pulled to the feasible and non-dominated regions, as illustrated in Fig. 2(d)-(f).

## 4 Experimental Study

### 4.1 Experimental Settings

The proposed PPS-M2M is compared with six other algorithms (M2M Liu2014Decomposition , MOEA/D-Epsilon Yang:2014vt , MOEA/D-SR jan2013study , MOEA/D-CDP jan2013study , C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017 ) on LIR-CMOP1-14 fan2017improved . The experimental parameters are listed as follows:

1. Population size: .

2. Halting condition: each algorithm runs for 30 times independently, and stops when 300,000 function evaluations are reached.

3. For the bi-objective instances, . For the tri-objective instances, .

4. The direction vectors are uniformly selected from the unit sphere in the first octant.

5. Parameter setting in $\epsilon$-domination: .

6. Parameter setting in an improved epsilon constraint-handling mechanism: , .

7. Parameter setting in MOEA/D-IEpsilon: , , , and .

8. Parameter setting in MOEA/D-Epsilon: , , and .

9. Parameter setting in MOEA/D-SR: .

### 4.2 Performance Metric

In order to measure the performance of the proposed algorithm (PPS-M2M) and the other six algorithms (M2M Liu2014Decomposition , MOEA/D-Epsilon Yang:2014vt , MOEA/D-SR jan2013study , MOEA/D-CDP jan2013study , C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017 ), two performance indicators are used: the inverted generational distance (IGD) bosman2003balance and the hypervolume (HV) Zitzler1999Multiobjective .

• Inverted Generational Distance (IGD):

Inverted Generational Distance (IGD) is an inverse mapping of Generational Distance (GD). It is the average distance from the points in a representative set of the true PF to the non-dominated solution set obtained by the algorithm. The calculation formula is

$$\begin{cases} IGD(P^*, A) = \dfrac{\sum_{y^* \in P^*} d(y^*, A)}{|P^*|} \\[1ex] d(y^*, A) = \min_{y \in A}\left\{\sqrt{\sum_{i=1}^{m} (y_i^* - y_i)^2}\right\} \end{cases} \tag{10}$$

where $P^*$ is a set of representative solutions on the true PF, and $A$ is the set of non-dominated solutions achieved by the algorithm. $d(y^*, A)$ represents the minimum Euclidean distance from the point $y^*$ on the true PF to the solutions in $A$. The smaller the IGD value, the better the performance of the algorithm.
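The IGD of Eq. (10) can be computed directly; a minimal sketch using plain Python lists of objective tuples:

```python
import math

def igd(reference_front, archive):
    """Eq. (10): mean distance from each reference point on the true PF
    to its nearest solution in the obtained set A."""
    def dist(y_star, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(y_star, y)))
    return sum(min(dist(y_star, y) for y in archive)
               for y_star in reference_front) / len(reference_front)
```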

• Hypervolume (HV):

HV has become a popular evaluation index, which reflects the closeness of the set of non-dominated solutions achieved by a CMOEA to the true PF. The performance of a CMOEA is evaluated by calculating the hypervolume of the space bounded by the non-dominated solution set and a reference point. The calculation formula is as follows:

$$HV(S) = VOL\left(\bigcup_{x \in S} [f_1(x), z_1^r] \times \cdots \times [f_m(x), z_m^r]\right) \tag{11}$$

where $VOL(\cdot)$ is the Lebesgue measure, $m$ denotes the number of objectives, and $z^r = (z_1^r, \ldots, z_m^r)^T$ is a user-defined reference point in the objective space, placed at 1.2 times the nadir point of the true PF. A larger HV value indicates better performance regarding diversity and/or convergence.
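For the bi-objective case, the HV of Eq. (11) reduces to a sweep over the front sorted by the first objective; a minimal sketch assuming minimization and a reference point dominated by the front:

```python
def hv_2d(points, ref):
    """Bi-objective hypervolume: area dominated by `points` (minimization)
    and bounded above by the reference point `ref`."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    volume, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:                    # sweep along f1 (ascending)
        if f2 < prev_f2:                  # skip dominated points
            volume += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return volume
```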

### 4.3 Discussion of Experiments

Table 1 and Table 3 show the IGD and HV values of the seven algorithms on LIR-CMOP1-14. According to the Friedman aligned test, PPS-M2M achieves the highest ranking among the seven CMOEAs. The $p$-values calculated by the statistics of the Friedman aligned test are close to zero, which reveals significant differences among the seven algorithms.

Table 2 and Table 4 show the adjusted $p$-values of the IGD and HV results for the Friedman aligned test, with PPS-M2M as the control method. To compare the statistical difference between PPS-M2M and the other six algorithms, we perform a series of post-hoc tests. To control the family-wise error rate (FWER), a set of post-hoc procedures, including the Holm Sture1979A , Holland holland1987improved , Finner Finner1993On , Hochberg hochberg1988sharper , Hommel hommel1988stagewise , Rom Rom1990A and Li Li2008A procedures, are used according to Derrac2011A . Since each adjusted $p$-value in Table 2 and Table 4 is less than the preset significance level 0.05, we can conclude that PPS-M2M is significantly better than the other six algorithms.

In order to further discuss the advantages of the proposed PPS-M2M in solving CMOPs, we plot non-dominated solutions achieved by each algorithm on LIR-CMOP2, LIR-CMOP7 and LIR-CMOP11 with the median HV values. The feasible and infeasible regions of LIR-CMOP2, LIR-CMOP7 and LIR-CMOP11, corresponding to three different types of difficulties fan2019dascmop , are plotted in Fig. 3.

As shown in Fig. 3(a), the feasible region of LIR-CMOP2 is very small, which makes it a feasibility-hard problem. Non-dominated solutions achieved by each algorithm on LIR-CMOP2 with the median HV values are plotted in Fig. 4. We can observe that the proposed PPS-M2M converges to the true PF and has good diversity, as shown in Fig. 4(a). The convergence of M2M is worse than that of PPS-M2M, as shown in Fig. 4(b). For the remaining five algorithms, their diversity is worse than that of PPS-M2M, as shown in Fig. 4(c)-(g).

In LIR-CMOP7, there are three large non-overlapping infeasible regions in front of the unconstrained PF, as illustrated in Fig. 3(b). In addition, the unconstrained PF is covered by one of the infeasible regions, which makes it a convergence-hard problem. The results of PPS-M2M and the other six algorithms (M2M, MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP, C-MOEA/D and NSGA-II-CDP) on LIR-CMOP7 are shown in Fig. 5. We can see that only PPS-M2M and M2M can get across the infeasible regions to reach the true PF, as illustrated in Fig. 5(a)-(b), while the other five algorithms (MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP, C-MOEA/D and NSGA-II-CDP) cannot converge to the true PF, as illustrated by Fig. 5(c)-(g). A possible reason is that the two large infeasible regions in front of the PF block the populations of these five algorithms on their way toward the constrained PF.

The PF of LIR-CMOP11 is discrete, as shown in Fig. 3(c); it is a problem with diversity-hardness. There are seven Pareto optimal solutions: two are located on the unconstrained PF, and five are located on the constrained PF. Fig. 6 shows the results of the seven tested algorithms on LIR-CMOP11. From Fig. 6(a)-(b), we can see that only PPS-M2M and M2M can find all the Pareto optimal solutions. Furthermore, PPS-M2M has better convergence than M2M. A possible reason is that the sub-populations are combined into a whole population, which is evolved by employing the improved epsilon constraint-handling method and the $\epsilon$-dominance technique during the last 10% of the maximum generations. The other five algorithms (MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP, C-MOEA/D and NSGA-II-CDP) find only part of the Pareto optimal solutions, because infeasible regions block their populations on the way to the true PF.

Based on the above observations and analysis, we can conclude that the proposed PPS-M2M outperforms the other six CMOEAs on most of the test instances.

Figure 3: Illustrations of the feasible and infeasible regions of LIR-CMOP2, LIR-CMOP7 and LIR-CMOP11, corresponding to three different types of difficulties as discussed in fan2019dascmop .

Figure 4: The non-dominated solutions achieved by each algorithm on LIR-CMOP2 with the median HV values.

Figure 5: The non-dominated solutions achieved by each algorithm on LIR-CMOP7 with the median HV values.

Figure 6: The non-dominated solutions achieved by each algorithm on LIR-CMOP11 with the median HV values.

## 5 Conclusion

This paper proposes a new algorithm (PPS-M2M) that combines a multi-objective to multi-objective (M2M) decomposition approach with a push and pull search (PPS) framework to deal with CMOPs. To be more specific, the search process of PPS-M2M is divided into two stages, namely the push and pull search processes. At the push search stage, PPS-M2M uses the M2M decomposition method to decompose a CMOP into a set of simple CMOPs which correspond to a set of sub-populations. Each simple CMOP is solved in a collaborative manner without considering any constraints, which helps the sub-populations effortlessly get across infeasible regions. Furthermore, some constrained landscape information can be estimated during the push search stage, such as the ratio of feasible to infeasible solutions and the maximum overall constraint violation, which can be further employed to guide the parameter settings of the constraint-handling mechanisms in the pull search stage. When the maximum rate of change between the ideal and nadir points is less than or equal to a predefined threshold, the search process is switched to the pull search stage. At the beginning of the pull search stage, the infeasible solutions of each sub-population achieved in the push stage are pulled to the feasible and non-dominated regions by adopting an improved epsilon constraint-handling approach. During the last 10% of the maximum generations, the sub-populations are combined into a whole population, which is evolved by employing the improved epsilon constraint-handling method and the $\epsilon$-dominance technique. The comprehensive experimental results demonstrate that the proposed PPS-M2M significantly outperforms the other six CMOEAs on most of the benchmark problems.

It is also worth noting that there are few studies regarding the use of the M2M decomposition method to solve CMOPs; the proposed PPS-M2M provides a feasible method. Obviously, improving the performance of PPS-M2M still requires a lot of work, such as enhancing the constraint-handling mechanisms in the pull search stage and integrating machine learning methods to dynamically allocate computational resources to the sub-populations of PPS-M2M. In addition, some other benchmark problems (e.g., CF1-CF10 zhang2008multiobjective ) and real-world optimization problems will be investigated using the proposed PPS-M2M.

## Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grants 61175073, 61300159, 61332002 and 51375287, the Guangdong Key Laboratory of Digital Signal and Image Processing, the Science and Technology Planning Project of Guangdong Province (2013B011304002), and the Project of the Educational Commission of Guangdong Province, China (2015KGJHZ014).

## References

• (1) D. Kalyanmoy, et al., Multi objective optimization using evolutionary algorithms, John Wiley and Sons, 2001.
• (2) K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation 6 (2) (2002) 182–197.

• (3) X. Cai, Z. Mei, Z. Fan, Q. Zhang, A constrained decomposition approach with grids for evolutionary multiobjective optimization, IEEE Transactions on Evolutionary Computation 22 (4) (2018) 564–577.
• (4) X. Cai, Z. Mei, Z. Fan, A decomposition-based many-objective evolutionary algorithm with two types of adjustments for direction vectors, IEEE Transactions on Cybernetics 48 (8) (2018) 2335–2348.
• (5) Z. Fan, Y. Fang, W. Li, X. Cai, C. Wei, E. Goodman, MOEA/D with angle-based constrained dominance principle for constrained multi-objective optimization problems, Applied Soft Computing 74 (2019) 621–633.
• (6) Z. Fan, W. Li, X. Cai, H. Huang, Y. Fang, Y. You, J. Mo, C. Wei, E. Goodman, An improved epsilon constraint-handling method in MOEA/D for CMOPs with large infeasible regions, Soft Computing (2017) 1–20.
• (7) Y. Wang, J.-P. Li, X. Xue, B.-C. Wang, Utilizing the correlation between constraints and objective function for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation.
• (8) Z.-Z. Liu, Y. Wang, Handling constrained multiobjective optimization problems with constraints in both the decision and objective spaces, IEEE Transactions on Evolutionary Computation.
• (9) G. Wang, X. Cai, Z. Cui, G. Min, J. Chen, High performance computing for cyber physical social systems by using evolutionary multi-objective optimization algorithm, IEEE Transactions on Emerging Topics in Computing (2018) 1–1, doi:10.1109/TETC.2017.2703784.
• (10) X. Cai, Y. Li, Z. Fan, Q. Zhang, An external archive guided multiobjective evolutionary algorithm based on decomposition for combinatorial optimization, IEEE Transactions on Evolutionary Computation 19 (4) (2015) 508–523.

• (11) K. Deb, R. Datta, A fast and accurate solution of constrained optimization problems using a hybrid bi-objective and penalty function approach, in: IEEE Congress on Evolutionary Computation, IEEE, 2010, pp. 1–8.
• (12) K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2-4) (2000) 311–338.
• (13) T. P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284–294.
• (14) T. Takahama, S. Sakai, Constrained optimization by the constrained differential evolution with an archive and gradient-based mutation, in: 2010 IEEE Congress on Evolutionary Computation (CEC), IEEE, 2010, pp. 1–9.
• (15) Y. Wang, Z. Cai, G. Guo, Y. Zhou, Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 37 (3) (2007) 560–575.
• (16) A. Homaifar, C. X. Qi, S. H. Lai, Constrained optimization via genetic algorithms, Simulation 62 (4) (1994) 242–253.
• (17) J. A. Joines, C. R. Houck, On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA’s, in: Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence, IEEE, 1994, pp. 579–584.
• (18) G.-G. Wang, Y. Tan, Improving metaheuristic algorithms with information feedback models, IEEE Transactions on Cybernetics 49 (2017) 542–555.
• (19) Y. G. Woldesenbet, G. G. Yen, B. G. Tessema, Constraint handling in multiobjective evolutionary optimization, IEEE Transactions on Evolutionary Computation 13 (3) (2009) 514–525.
• (20) E. Mezura-Montes, J. Velázquez-Reyes, C. C. Coello, Modified differential evolution for constrained optimization, in: 2006 IEEE International Conference on Evolutionary Computation, IEEE, 2006, pp. 25–32.
• (21) E. Mezura-Montes, A. G. Palomeque-Ortiz, Parameter control in differential evolution for constrained optimization, in: 2009 IEEE Congress on Evolutionary Computation, IEEE, 2009, pp. 1375–1382.
• (22) Z. Fan, J. Liu, T. Sorensen, P. Wang, Improved differential evolution based on stochastic ranking for robust layout synthesis of MEMS components, IEEE Transactions on Industrial Electronics 56 (4) (2009) 937–948.
• (23) G. Leguizamón, C. A. C. Coello, A boundary search based ACO algorithm coupled with stochastic ranking, in: 2007 IEEE Congress on Evolutionary Computation, IEEE, 2007, pp. 165–172.
• (24) H.-L. Liu, F. Gu, Q. Zhang, Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems, IEEE Transactions on Evolutionary Computation 18 (3) (2014) 450–455.
• (25) Z. Fan, W. Li, X. Cai, H. Li, C. Wei, Q. Zhang, K. Deb, E. Goodman, Push and pull search for solving constrained multi-objective optimization problems, Swarm and Evolutionary Computation 44 (2019) 665–679.
• (26) M. Laumanns, L. Thiele, K. Deb, E. Zitzler, Combining convergence and diversity in evolutionary multiobjective optimization, Evolutionary Computation 10 (3) (2002) 263–282.
• (27) Z. Yang, X. Cai, Z. Fan, Epsilon constrained method for constrained multiobjective optimization problems: some preliminary results, in: Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, ACM, 2014, pp. 1181–1186.
• (28) M. A. Jan, R. A. Khanum, A study of two penalty-parameterless constraint handling techniques in the framework of MOEA/D, Applied Soft Computing 13 (1) (2013) 128–148.
• (29) M. Asafuddoula, T. Ray, R. Sarker, K. Alam, An adaptive constraint handling approach embedded MOEA/D, in: 2012 IEEE Congress on Evolutionary Computation, IEEE, 2012, pp. 1–8.
• (30) R. M. Rizk-Allah, R. A. El-Sehiemy, G.-G. Wang, A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution, Applied Soft Computing 63 (2018) 206–222.
• (31) T. Takahama, S. Sakai, Constrained optimization by the constrained differential evolution with gradient-based mutation and feasible elites, in: 2006 IEEE International Conference on Evolutionary Computation, IEEE, 2006, pp. 1–8.
• (32) P. A. Bosman, D. Thierens, The balance between proximity and diversity in multiobjective evolutionary algorithms, IEEE Transactions on Evolutionary Computation 7 (2) (2003) 174–188.
• (33) E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach, IEEE Transactions on Evolutionary Computation 3 (4) (1999) 257–271.
• (34) S. Holm, A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics 6 (2) (1979) 65–70.
• (35) B. S. Holland, M. D. Copenhaver, An improved sequentially rejective Bonferroni test procedure, Biometrics (1987) 417–423.
• (36) H. Finner, On a monotonicity problem in step-down multiple test procedures, Journal of the American Statistical Association 88 (423) (1993) 920–923.
• (37) Y. Hochberg, A sharper Bonferroni procedure for multiple tests of significance, Biometrika 75 (4) (1988) 800–802.
• (38) G. Hommel, A stagewise rejective multiple test procedure based on a modified Bonferroni test, Biometrika 75 (2) (1988) 383–386.
• (39) D. M. Rom, A sequentially rejective test procedure based on a modified Bonferroni inequality, Biometrika 77 (3) (1990) 663–665.
• (41) J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm and Evolutionary Computation 1 (1) (2011) 3–18.
• (42) Z. Fan, W. Li, X. Cai, H. Li, C. Wei, Q. Zhang, K. Deb, E. Goodman, Difficulty adjustable and scalable constrained multi-objective test problem toolkit, Evolutionary Computation (2019) accepted.
• (43) Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, S. Tiwari, Multiobjective optimization test instances for the cec 2009 special session and competition, University of Essex, Colchester, UK and Nanyang technological University, Singapore, special session on performance assessment of multi-objective optimization algorithms, technical report 264.