Push and Pull Search Embedded in an M2M Framework for Solving Constrained Multi-objective Optimization Problems

06/02/2019, by Zhun Fan et al.

In dealing with constrained multi-objective optimization problems (CMOPs), a key issue of multi-objective evolutionary algorithms (MOEAs) is to balance the convergence and diversity of working populations.


1 Introduction

Many real-world optimization problems can be formulated as constrained multi-objective optimization problems (CMOPs), which can be defined as follows kalyanmoy2001multi :

\[
\begin{aligned}
\text{minimize} \quad & F(\mathbf{x}) = (f_1(\mathbf{x}), \ldots, f_m(\mathbf{x}))^T \\
\text{subject to} \quad & g_i(\mathbf{x}) \ge 0, \quad i = 1, \ldots, q \\
& h_j(\mathbf{x}) = 0, \quad j = 1, \ldots, p \\
& \mathbf{x} \in \mathbb{R}^n
\end{aligned}
\tag{1}
\]

where F(x) is an m-dimensional objective vector with F(x) ∈ R^m, g_i(x) ≥ 0 is an inequality constraint and q is the number of inequality constraints, h_j(x) = 0 is an equality constraint and p represents the number of equality constraints, and x is an n-dimensional decision vector.

In order to solve CMOPs with equality constraints, the equality constraints are often transformed into inequality constraints by introducing an extremely small positive number δ as follows:

\[
\begin{aligned}
h_j'(\mathbf{x}) \equiv \delta - |h_j(\mathbf{x})| \ge 0, \quad j = 1, \ldots, p
\end{aligned}
\tag{2}
\]

To deal with a set of constraints in CMOPs, an overall constraint violation is employed as follows:

\[
\begin{aligned}
\phi(\mathbf{x}) = \sum_{i=1}^{q} \left|\min\big(g_i(\mathbf{x}), 0\big)\right| + \sum_{j=1}^{p} \left|\min\big(h_j'(\mathbf{x}), 0\big)\right|
\end{aligned}
\tag{3}
\]

Given a solution x, if φ(x) = 0, x is a feasible solution; otherwise it is infeasible. All feasible solutions form the feasible solution set.
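As a concrete illustration of Eqs. (2)-(3), the following minimal Python sketch computes the overall constraint violation; the function name, the default relaxation value, and the example constraints are hypothetical choices made for illustration, not part of the original algorithm.

```python
import numpy as np

def overall_violation(x, g_list, h_list, delta=1e-4):
    """Overall constraint violation phi(x) of Eqs. (2)-(3).

    g_list: inequality constraints, satisfied when g(x) >= 0.
    h_list: equality constraints, relaxed to delta - |h(x)| >= 0 (Eq. (2)).
    """
    phi = 0.0
    for g in g_list:
        phi += abs(min(g(x), 0.0))                 # counts only when g(x) < 0
    for h in h_list:
        phi += abs(min(delta - abs(h(x)), 0.0))    # counts only when |h(x)| > delta
    return phi

# Hypothetical example: g(x) = 1 - x1^2 - x2^2 >= 0 and h(x) = x1 - x2 = 0.
g_list = [lambda x: 1.0 - x[0] ** 2 - x[1] ** 2]
h_list = [lambda x: x[0] - x[1]]
x = np.array([0.6, 0.2])
print(overall_violation(x, g_list, h_list))        # > 0, so this x is infeasible
```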

To deal with constraints, Deb et al. 996017 defined a constrained dominance principle (CDP): a solution x^a is said to constrained-dominate a solution x^b if one of the following conditions is met (a compact comparison sketch in Python is given after this list):

  1. Solution x^a is feasible and solution x^b is infeasible.

  2. Both x^a and x^b are infeasible, and the overall constraint violation of x^a is smaller than that of x^b.

  3. Both x^a and x^b are feasible, and x^a dominates x^b in terms of objectives.
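The sketch below implements this comparison; the helper names and the convention that the overall violations are passed in precomputed are assumptions made for illustration.

```python
import numpy as np

def dominates(f_a, f_b):
    """Pareto dominance on objective vectors (minimization)."""
    f_a, f_b = np.asarray(f_a, float), np.asarray(f_b, float)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def constrained_dominates(f_a, phi_a, f_b, phi_b):
    """CDP: True if solution a constrained-dominates solution b."""
    if phi_a == 0.0 and phi_b > 0.0:     # rule 1: feasible beats infeasible
        return True
    if phi_a > 0.0 and phi_b > 0.0:      # rule 2: smaller violation wins
        return phi_a < phi_b
    if phi_a == 0.0 and phi_b == 0.0:    # rule 3: Pareto dominance on objectives
        return dominates(f_a, f_b)
    return False
```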

Given a feasible solution x*, if there is no other feasible solution x such that x dominates x* in terms of objectives, x* is called a Pareto optimal solution. All Pareto optimal solutions constitute the Pareto set (PS). The image of the PS in the objective space is called the Pareto front (PF), which is defined as PF = {F(x) | x ∈ PS}.

Over the past decade, a lot of research has been done in the field of multi-objective evolutionary algorithms (MOEAs) cai2018constrained cai2018decomposition ; however, comparatively little of it addresses CMOPs. Recently, several constrained multi-objective evolutionary algorithms (CMOEAs) Fan2016Angle fan2017improved wang2019utilizing liu2019handling have been proposed to solve CMOPs. CMOEAs are particularly suitable for solving CMOPs because they can find a number of Pareto optimal solutions in a single run and are not affected by the mathematical properties of the objective functions. As a result, the use of evolutionary algorithms to solve multi-objective optimization problems 7927724 cai2015external has become a research hotspot in recent years.

To better balance minimizing the objectives and satisfying the constraints of CMOPs, many constraint-handling mechanisms have been designed, such as penalty function methods 5586543 , CDP deb2000efficient , stochastic ranking runarsson2000stochastic , the ε-constrained method takahama2010constrained , and multi-objective concepts wang2007multiobjective . For example, penalty function methods transform a constrained optimization problem into an unconstrained one by adding the constraint violations, multiplied by penalty factors, to the objectives. If the penalty factors remain constant throughout the optimization process, it is called a static penalty method homaifar1994constrained . If the penalty factors are constructed as a function of the number of iterations or the iteration time, it is called a dynamic penalty approach joines1994use . If the penalty factors change according to feedback information wang2017improving gathered during the search process, it is called an adaptive penalty approach woldesenbet2009constraint .
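As a rough illustration of the difference between static and dynamic penalties (the constants and growth schedule below are illustrative choices, not taken from the cited works), a minimal sketch for a scalar objective f with overall violation phi could be:

```python
def penalized_fitness_static(f, phi, penalty_factor=1e3):
    """Static penalty: a constant factor multiplies the constraint violation."""
    return f + penalty_factor * phi

def penalized_fitness_dynamic(f, phi, gen, c=0.5, exponent=2.0):
    """Dynamic penalty: the factor (c * gen)^exponent grows with the generation counter."""
    return f + (c * gen) ** exponent * phi
```

An adaptive penalty would instead adjust the factor from feedback such as the current ratio of feasible solutions.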

Deb proposed a constraint-handling method called CDP deb2000efficient , in which the fitness of a feasible solution is always better than that of an infeasible solution. Subsequently, CDP was extended to differential evolution (DE) by Mezura-Montes et al. mezura2006modified to select target vectors and trial vectors. Moreover, CDP was used for designing parameter control in DE for constrained optimization mezura2009parameter .

In order to overcome the weaknesses of penalty constraint-handling methods, stochastic ranking was proposed runarsson2000stochastic , which uses a bubble-sort-like process to deal with constrained optimization problems. Stochastic ranking uses a probability parameter p_f to determine whether a comparison is based on the objectives or on the constraints. In the case of p_f = 0, stochastic ranking behaves like the feasibility rules. Furthermore, it can be coupled with various algorithms; for example, stochastic ranking has been combined with DE fan2009improved and ant colony optimization leguizamon2007boundary .

In this paper, we propose a new approach, namely PPS-M2M, to solve CMOPs, which combines an M2M approach Liu2014Decomposition with the push and pull search (PPS) framework fan2018push . Unlike other constraint-handling mechanisms, PPS-M2M decomposes a CMOP into a number of simple constrained multi-objective optimization sub-problems in the initial phase. Each sub-problem corresponds to a sub-population. During the search process, the sub-populations evolve in a collaborative manner to ensure the diversity of the population. Inspired by the idea of the information feedback model, some information about the constrained landscape is collected to help set the parameters of the constraint-handling mechanisms.

In addition, PPS-M2M divides the search process into two different phases. In the first phase, each sub-population approaches the unconstrained PF as closely as possible without considering any constraints. In the second phase, each sub-population is pulled back to the constrained PF using constraint-handling approaches. The pull search stage is divided into two parts: (1) for the first 90% of its generations, only an improved epsilon constraint-handling mechanism fan2017improved is used to optimize each subproblem; (2) in the last 10% of the generations, all sub-populations are merged into one population, and the ε-dominance laumanns2002combining and the improved epsilon constraint-handling mechanism fan2017improved work together to evolve the merged population. In summary, the proposed PPS-M2M has the following advantages.

  1. At the beginning of the search, the method decomposes the population into a set of sub-populations, and each sub-population searches a different multi-objective sub-problem in a coordinated manner. In other words, the PFs of all these subproblems constitute the PF of the CMOP. The computational complexity is thus reduced by restricting the genetic operators to each sub-population, and the convergence and diversity of the population are effectively ensured.

  2. When dealing with constraints, the method follows a procedure of "ignore the constraints first, consider the constraints second", so that infeasible regions lying in front of the true PF present practically no extra barriers to the working population.

  3. Since the landscape of constraints has been probed in the unconstrained push search stage, this information can be employed to guide the parameter settings of the constraint-handling mechanisms in the pull search stage.

The remainder of this paper is organized as follows. Section 2 introduces the general idea of the PPS framework and the M2M decomposition approach. Section 3 gives an instantiation of PPS-M2M. Section 4 presents a set of experiments comparing the proposed PPS-M2M with six other CMOEAs, including M2M Liu2014Decomposition , MOEA/D-Epsilon Yang:2014vt , MOEA/D-SR jan2013study , MOEA/D-CDP jan2013study , C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017 . Finally, conclusions are drawn in Section 5.

2 Related Work

In this section, we introduce the general idea of the PPS framework and the M2M population decomposition approach, which are used in the rest of this paper.

2.1 The general idea of PPS framework

The push and pull search (PPS) framework was introduced by Fan et al. fan2018push . Unlike other constraint-handling mechanisms, the search process of PPS is divided into two different stages, the push search and the pull search, and follows a procedure of "push first, pull second": the working population is pushed toward the unconstrained PF without considering any constraints in the push stage, and a constraint-handling mechanism is then used to pull the working population back to the constrained PF in the pull stage.

In order to switch from the push search stage to the pull search stage, the following condition is applied:

\[
\begin{aligned}
r_k \equiv \max\{r_z^k, r_n^k\} \le \varepsilon_f
\end{aligned}
\tag{4}
\]

where ε_f is a threshold defined by the user. In Eq. (4), r_z^k is the rate of change of the ideal point over the last l generations according to Eq. (5), and r_n^k is the rate of change of the nadir point over the last l generations according to Eq. (6).

\[
\begin{aligned}
r_z^k = \max_{i=1,\ldots,m} \frac{\left|z_i^k - z_i^{k-l}\right|}{\max\left\{\left|z_i^{k-l}\right|, \Delta\right\}}
\end{aligned}
\tag{5}
\]

\[
\begin{aligned}
r_n^k = \max_{i=1,\ldots,m} \frac{\left|n_i^k - n_i^{k-l}\right|}{\max\left\{\left|n_i^{k-l}\right|, \Delta\right\}}
\end{aligned}
\tag{6}
\]

where z^k = (z_1^k, ..., z_m^k) and n^k = (n_1^k, ..., n_m^k) are the ideal and nadir points in the k-th generation, and z^{k-l}, n^{k-l} are the ideal and nadir points in the (k-l)-th generation. Δ is a very small positive number, which is used to make sure that the denominators in Eq. (5) and Eq. (6) are not equal to zero. When r_k is less than or equal to ε_f, the push search stage is completed and the pull search stage is ready to start.
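A minimal sketch of Eqs. (4)-(6), assuming the ideal and nadir points of generations k and k-l are tracked elsewhere, might look like this:

```python
import numpy as np

def max_rate_of_change(z_k, z_k_l, n_k, n_k_l, delta=1e-6):
    """r_k = max{r_z^k, r_n^k} from Eqs. (4)-(6).

    z_k, z_k_l: ideal points at generations k and k-l.
    n_k, n_k_l: nadir points at generations k and k-l.
    """
    z_k, z_k_l = np.asarray(z_k, float), np.asarray(z_k_l, float)
    n_k, n_k_l = np.asarray(n_k, float), np.asarray(n_k_l, float)
    rz = np.max(np.abs(z_k - z_k_l) / np.maximum(np.abs(z_k_l), delta))
    rn = np.max(np.abs(n_k - n_k_l) / np.maximum(np.abs(n_k_l), delta))
    return max(rz, rn)

# The push stage ends once max_rate_of_change(...) <= eps_f, the user threshold of Eq. (4).
```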

The major advantages of the PPS framework include:

  1. In the push stage, the working population conveniently gets across infeasible regions without considering any constraints, which avoids the impact of infeasible regions lying in front of the true PF.

  2. Since the landscape of constraints has already been estimated in the push search stage, valuable information can be collected to guide the parameter settings in the pull search stage, which not only facilitates the parameter settings of the algorithm, but also enhances its adaptability when dealing with CMOPs.

2.2 The M2M population decomposition approach

Employing a number of sub-populations to solve a problem in a collaborative way rizk2018novel is a widely used approach, which can help an algorithm balance its convergence and diversity. One of the most popular methods is the M2M population decomposition approach Liu2014Decomposition , which decomposes a multi-objective optimization problem into a number of simple multi-objective optimization subproblems during initialization and then solves these sub-problems simultaneously in a coordinated manner. For this purpose, K unit vectors v^1, ..., v^K are chosen in the first octant of the objective space R_+^m. Then R_+^m is divided into K subregions Ω_1, ..., Ω_K, where Ω_k is

\[
\begin{aligned}
\Omega_k = \left\{ \mathbf{u} \in \mathbb{R}_+^m \mid \langle \mathbf{u}, \mathbf{v}^k \rangle \le \langle \mathbf{u}, \mathbf{v}^j \rangle \ \text{for all}\ j = 1, \ldots, K \right\}
\end{aligned}
\tag{7}
\]

where ⟨u, v^k⟩ is the acute angle between u and v^k. Therefore, the population is decomposed into K sub-populations, and each sub-population searches a different multi-objective subproblem. Subproblem k is defined as:

\[
\begin{aligned}
\text{minimize} \quad & F(\mathbf{x}) = (f_1(\mathbf{x}), \ldots, f_m(\mathbf{x}))^T \\
\text{subject to} \quad & F(\mathbf{x}) \in \Omega_k, \quad \mathbf{x} \in \Omega
\end{aligned}
\tag{8}
\]

where Ω is the decision (search) space. Altogether there are K subproblems, and each subproblem is solved by employing a sub-population P^k. Moreover, each sub-population has N_s individuals. In order to keep N_s individuals for each subproblem, some selection strategies are used. If sub-population P^k has fewer than N_s individuals, then individuals from S (the entire population) are randomly selected and added to P^k.
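The membership test of Eq. (7) amounts to picking the direction vector with the smallest acute angle to F(x). The sketch below shows this for the bi-objective case; the way the unit vectors are generated here is one simple choice, not necessarily the sampling used in the original M2M implementation.

```python
import numpy as np

def direction_vectors_2d(K):
    """K unit vectors evenly spread over the first quadrant (m = 2 case)."""
    angles = (np.arange(K) + 0.5) / K * (np.pi / 2)
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

def subregion_index(f, V):
    """Index k of the subregion Omega_k containing objective vector f (Eq. (7))."""
    f = np.asarray(f, dtype=float)
    norm = np.linalg.norm(f)
    if norm == 0.0:
        return 0                       # degenerate case: assign arbitrarily
    cosines = V @ f / norm             # rows of V are unit vectors
    return int(np.argmax(cosines))     # largest cosine = smallest acute angle

V = direction_vectors_2d(6)
print(subregion_index([0.2, 0.9], V))  # index 5: the subregion nearest the f2 axis
```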

A major advantage of M2M is that it can effectively balance diversity and convergence at each generation by decomposing a multi-objective optimization problem into multiple simple multi-objective optimization problems.

3 An Instantiation of PPS-M2M

This section describes the details of an instantiation that combines the M2M approach with the PPS framework under a non-dominated sorting scheme to solve CMOPs.

3.1 The M2M approach

In the initial stage, PPS-M2M uses a decomposition strategy to decompose a CMOP into a set of sub-problems that are solved in a collaborative manner, with each sub-problem corresponding to a sub-population. For the sake of simplicity, we assume that all the objective functions of a CMOP are non-negative. Otherwise, each objective function f_i(x) is replaced by f_i(x) - z_i^{min}, where z_i^{min} is the minimum value of f_i(x) found so far.

Assuming that the objective space is divided into K sub-regions, K direction vectors v^1, ..., v^K are uniformly distributed on the first octant of the unit sphere, where v^k is the center of the k-th subregion. The objective space is thereby divided into K non-adjacent sub-regions Ω_1, ..., Ω_K, where the k-th sub-region is obtained by Eq. (7). Through such a decomposition method, the CMOP in Eq. (1) can be decomposed into K simple CMOPs. The procedure is described in Algorithm 1. As an example, when the number of objectives is m = 2 and the number of sub-regions is K = 6, the population decomposition is illustrated in Fig. 1, where v^1, ..., v^6 are six evenly distributed direction vectors.

1   Function result = AllocationSubPop(S, K)
2       for k = 1 to K do
3           Initialize P^k as the solutions in S whose F-values are in Ω_k according to Eq. (7);
4           // S: a set of individual solutions and their F-values.
5       end for
6       return P^1, ..., P^K;
7   end
Algorithm 1 Allocation of Individuals to Sub-populations
Figure 1: The population decomposition scheme.

3.2 The PPS framework

The search process of the PPS framework is divided into two main stages: the push search stage and the pull search stage. In the push search stage, each sub-population uses an unconstrained NSGA-II to search for non-dominated solutions without considering any constraints. When using unconstrained NSGA-II to solve the simple multi-objective optimization problems, an individual is measured by two attributes 996017 : the non-domination rank i_rank and the crowding distance i_dist. When an individual i is compared with another individual j, individual i enters the descendant population if one of the following two requirements is satisfied.

  1. Individual i dominates individual j, i.e., i_rank < j_rank;

  2. Individuals i and j are non-dominated with respect to each other and i has the larger crowding distance, i.e., i_rank = j_rank and i_dist > j_dist.

In other words, when two individuals belong to different non-domination ranks, the individual with the lower (better) rank is selected; when the two individuals have the same rank, the one with the larger crowding distance (i.e., located in a less crowded region) is selected.
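Assuming the non-domination ranks and crowding distances have already been assigned by the standard NSGA-II procedures, this comparison can be sketched as:

```python
def crowded_compare(rank_i, dist_i, rank_j, dist_j):
    """Return True if individual i is preferred over individual j:
    lower rank wins; with equal ranks, larger crowding distance wins."""
    if rank_i != rank_j:
        return rank_i < rank_j
    return dist_i > dist_j
```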

In the push search stage, the constrained optimization problem is optimized without considering any constraints. The pseudocode of the push search is given in Algorithm 2. In line 2, non-dominated sorting is carried out on the sub-population P^k. In lines 3-6, whole fronts are copied into Q until adding the next front F_i would make the number of solutions in Q greater than N_s. Lines 7-9 select the remaining solutions into Q from F_i according to crowding distance. In line 10, the sub-population is updated by setting P^k = Q.

1   Function PushSubproblems(P^k, N_s)
2       (F_1, F_2, ...) = nondominated-sort(P^k);
3       Q = ∅; i = 1;
4       while |Q| + |F_i| ≤ N_s do
5           Q = Q ∪ F_i; i = i + 1;
6       end while
7       calculate crowding-distance in F_i;
8       sort solutions in F_i by crowding-distance in a descending order;
9       Q = Q ∪ F_i[1 : (N_s − |Q|)];
10      P^k = Q;
11  end
Algorithm 2 Push Subproblems

In the pull search stage, the constrained optimization problem is optimized by considering the constraints, which pulls the population toward the feasible and non-dominated regions. The pseudocode of the pull search is given in Algorithm 3.

The mechanism described in Eq. (4) controls when the search process switches from the push search to the pull search. At the beginning of the evolutionary process, the value of r_k is initialized to 1.0 in order to ensure a thorough search in the push stage. The value of r_k is then updated according to Eq. (4). When the value of r_k is less than or equal to the preset threshold ε_f, the search behavior is changed.

In the pull stage, we need to prevent the population from falling into local optima and to balance the evolutionary search between feasible and infeasible regions. To achieve these goals, an improved epsilon constraint-handling mechanism fan2017improved and the ε-dominance laumanns2002combining are used to deal with the constraints and objectives in the pull search stage, with the detailed formula given as follows.

The improved epsilon constraint-handling mechanism is defined as follows:

\[
\begin{aligned}
\varepsilon(k) =
\begin{cases}
\phi(\mathbf{x}^{\theta}), & k = 0 \\
(1 - \tau)\, \varepsilon(k-1), & r_f^k < \alpha \ \text{and}\ 0 < k < T_c \\
\varepsilon(0)\left(1 - \dfrac{k}{T_c}\right)^{cp}, & r_f^k \ge \alpha \ \text{and}\ 0 < k < T_c \\
0, & k \ge T_c
\end{cases}
\end{aligned}
\tag{9}
\]

where ε(k) is the value of the ε function at generation k, φ(x^θ) is the overall constraint violation of the top θ-th individual in the initial population, and r_f^k is the proportion of feasible solutions in generation k. τ controls the speed at which the relaxation of constraints is reduced in the case of r_f^k < α (0 < τ < 1). α controls the searching preference between the feasible and infeasible regions. cp is used to control the reducing trend of the relaxation of constraints in the case of r_f^k ≥ α. ε(k) stops updating when the generation counter reaches the control generation T_c. In PPS-M2M, ε(0) is set to the maximum overall constraint violation found when the push search finishes. In the case of r_f^k < α, Eq. (9) decreases ε(k) at an exponential rate, which has the potential to find feasible solutions more efficiently than the setting in takahama2006constrained . In the case of r_f^k ≥ α, Eq. (9) adopts the same setting as in takahama2006constrained . In the pull search stage, a new individual is selected using the constraint-handling mechanism described by Algorithm 3.
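A sketch of the update rule in Eq. (9) follows; the argument names mirror the symbols above, and the k = 0 case assumes ε(0) is supplied by the caller (here, the maximum overall constraint violation from the push stage):

```python
def update_epsilon(eps_prev, eps_0, k, r_fk, tau, alpha, cp, Tc):
    """Epsilon level at generation k of the pull stage, following Eq. (9).

    eps_prev: epsilon(k-1); eps_0: epsilon(0); r_fk: ratio of feasible
    solutions in generation k; tau, alpha, cp, Tc as described above.
    """
    if k == 0:
        return eps_0                           # epsilon(0), e.g. maxViolation here
    if k >= Tc:
        return 0.0                             # relaxation removed after T_c
    if r_fk < alpha:
        return (1.0 - tau) * eps_prev          # exponential decrease
    return eps_0 * (1.0 - k / Tc) ** cp        # setting adopted from takahama2006constrained
```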

1   Function result = PullSubproblems(P^1, ..., P^K, ε(gen), T_max, η)
2       // T_max: the maximum generation.
3       // η: parameter setting in ε-domination;
4       // ε(gen): the current ε level of Eq. (9);
5       if gen < 0.9 · T_max then
6           for k = 1 to K do
7               The improved epsilon constraint-handling mechanism is used to search for non-dominated and feasible solutions in P^k;
8           end for
9
10      else
11          The improved epsilon constraint-handling mechanism and the ε-dominance laumanns2002combining are used to search for non-dominated and feasible solutions in the merged population P^1 ∪ ... ∪ P^K;
12      end if
13      return result;
14  end
Algorithm 3 Pull Subproblems

3.3 PPS Embedded in M2M

Figure 2: Infeasible regions make the original unconstrained PF partially feasible. The objective space is divided into five sub-regions by five direction vectors uniformly distributed on the first octant of the unit sphere, giving five non-adjacent sub-regions Ω_1, ..., Ω_5. Panels (a)-(c) show the push search process, in which the working population of each sub-region crosses the infeasible regions without any barriers. Panels (d)-(f) show the pull search process, in which the infeasible solutions in the working population of each sub-region are pulled toward the feasible and non-dominated regions.

The pseudo-code of PPS-M2M is given in Algorithm 4. The algorithm is initialized at lines 1-3. At line 2, the population of a CMOP is divided into K sub-populations, and the number of individuals in each sub-population is equal to N_s. At line 3, the maximum rate of change of the ideal and nadir points is initialized (r_k = 1.0), and the flag of the search stage is set to push (PushStage = true). The algorithm runs repeatedly from line 4 to line 38 until the termination condition is met. Lines 5-13 describe the process of generating new solutions for each sub-population. A number of new solutions are generated at lines 6-10. At lines 12-13, the merged solution set S is allocated to the sub-populations according to Eq. (7). The maximum rate of change between the ideal and nadir points during the last l generations is calculated at line 15. The parameter ε(gen) is updated at lines 16-25. The updating process for each sub-population is described in lines 26-36. If the size of sub-population P^i is less than N_s, then individual solutions are randomly selected from S and added to P^i. If the size of sub-population P^i is greater than N_s, then N_s solutions are selected by using the PPS framework. More specifically, at the push search stage, individual solutions are selected by employing the non-dominated sorting method without considering any constraints, as illustrated in line 31. At the pull search stage, individual solutions are selected by using the improved epsilon constraint-handling approach, as illustrated in line 33. The generation counter is updated at line 37. At line 39, a set of non-dominated and feasible solutions is output.

As an example, the search behavior of PPS-M2M is illustrated in Fig. 2 and can be summarized as follows. At first, five direction vectors are uniformly sampled in the objective space, from which five non-adjacent sub-regions are constructed. The working sub-populations reach the unconstrained PF without considering any constraints in the push search, as illustrated by Fig. 2(a)-(c). It is notable that in this particular case some solutions located on the unconstrained PF are feasible. In the pull search stage, the infeasible solutions are pulled toward the feasible and non-dominated regions, as illustrated by Fig. 2(d)-(f).

Input:
  1. K: the number of subproblems;

  2. K unit direction vectors: v^1, ..., v^K;

  3. N_s: the size of each subpopulation;

  4. S: a set of individual solutions and their F-values;

  5. T_c: the control generation for ε(gen);

  6. η: parameter setting in ε-domination;

  7. T_max: the maximum generation.

Output: a set of non-dominated and feasible solutions.
1   Initialization:
2   Decompose a population S into K sub-populations P^1, ..., P^K, each containing N_s individuals, according to v^1, ..., v^K;
3   Set r_k = 1.0, PushStage = true, R = ∅, gen = 1;
4   while gen ≤ T_max do
5       for i = 1 to K do
6           foreach x ∈ P^i do
7               Randomly choose a mate x^a from P^i;
8               Apply genetic operators on x and x^a to generate a new solution y;
9               R = R ∪ {y};
10          end foreach
11      end for
12      S = P^1 ∪ ... ∪ P^K ∪ R; R = ∅;
13      Use AllocationSubPop(S, K) to set P^1, ..., P^K according to Eq. (7);
14      // Update the rate of change, the search stage, and the ε level:
15      Calculate r_k according to Eq. (4);
16      if gen < T_c then
17          if r_k ≤ ε_f and PushStage == true then
18              PushStage = false;
19              ε(0) = maxViolation;
20          else
21              Update ε(gen) according to Eq. (9);
22          end if
23      else
24          ε(gen) = 0;
25      end if
26      for i = 1 to K do
27          if |P^i| < N_s then
28              Randomly select N_s − |P^i| solutions from S and add them to P^i;
29          else
30              if PushStage == true then
31                  P^i = PushSubproblems(P^i, N_s);
32              else
33                  Update P^i by the pull search PullSubproblems(P^1, ..., P^K, ε(gen), T_max, η) of Algorithm 3;
34              end if
35          end if
36      end for
37      gen = gen + 1;
38  end while
39  Output the non-dominated and feasible solutions.
Algorithm 4 PPS-M2M

4 Experimental Study

4.1 Experimental Settings

The proposed PPS-M2M is compared with six other algorithms (M2M Liu2014Decomposition , MOEA/D-Epsilon Yang:2014vt , MOEA/D-SR jan2013study , MOEA/D-CDP jan2013study , C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017 ) on LIR-CMOP1-14 fan2017improved . The experimental parameters are listed as follows:

  1. Population size: .

  2. Halting condition: each algorithm is run 30 times independently and stops when 300,000 function evaluations are reached.

  3. For the bi-objective instances, . For the tri-objective instances, .

  4. The direction vectors are uniformly selected from the unit sphere in the first octant.

  5. Parameter setting in ε-domination: .

  6. Parameter setting in an improved epsilon constraint-handling mechanism: , .

  7. Parameter setting in MOEA/D-IEpsilon: , , , and .

  8. Parameter setting in MOEA/D-Epsilon: , , and .

  9. Parameter setting in MOEA/D-SR: .

4.2 Performance Metric

In order to measure the performance of the proposed algorithm (PPS-M2M) and the six other algorithms (M2M Liu2014Decomposition , MOEA/D-Epsilon Yang:2014vt , MOEA/D-SR jan2013study , MOEA/D-CDP jan2013study , C-MOEA/D asafuddoula2012adaptive and NSGA-II-CDP 996017 ), two performance indicators are used: the inverted generational distance (IGD) bosman2003balance and the hypervolume (HV) Zitzler1999Multiobjective .

  • Inverted Generational Distance (IGD):

The inverted generational distance (IGD) is the inverse counterpart of the generational distance (GD). It measures the average distance from the points in a representative set of the true Pareto front to the non-dominated solution set obtained by the algorithm. It is calculated as

\[
\begin{aligned}
\mathrm{IGD}(P^*, A) = \frac{\sum_{\mathbf{y}^* \in P^*} d(\mathbf{y}^*, A)}{|P^*|}
\end{aligned}
\tag{10}
\]

where P^* is a set of representative solutions on the true PF, A is the set of non-dominated solutions obtained by the algorithm, and d(y^*, A) is the minimum Euclidean distance from the point y^* on the true PF to the solutions in A. The smaller the IGD value, the better the performance of the algorithm.
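A direct Python sketch of Eq. (10), with both sets given as arrays of objective vectors, is:

```python
import numpy as np

def igd(pf_ref, approx):
    """Eq. (10): mean distance from each reference PF point to its nearest
    solution in the obtained non-dominated set (arrays of shape (n, m))."""
    pf_ref = np.asarray(pf_ref, dtype=float)
    approx = np.asarray(approx, dtype=float)
    dists = np.linalg.norm(pf_ref[:, None, :] - approx[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```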

  • Hypervolume (HV):

HV has become a popular evaluation indicator, which reflects the closeness of the set of non-dominated solutions achieved by a CMOEA to the true PF. The performance of a CMOEA is evaluated by calculating the hypervolume of the region bounded by its non-dominated solution set and a reference point:

\[
\begin{aligned}
\mathrm{HV}(A) = \mathrm{Leb}\left( \bigcup_{\mathbf{x} \in A} \left[f_1(\mathbf{x}), z_1^r\right] \times \cdots \times \left[f_m(\mathbf{x}), z_m^r\right] \right)
\end{aligned}
\tag{11}
\]

where Leb(·) is the Lebesgue measure, m denotes the number of objectives, and z^r = (z_1^r, ..., z_m^r)^T is a user-defined reference point in the objective space. In this paper, the reference point is placed at 1.2 times the nadir point of the true PF. A larger HV value indicates better performance in terms of diversity and/or convergence.
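Computing HV exactly for many objectives requires a dedicated algorithm (e.g., WFG); for the bi-objective case, Eq. (11) reduces to a sum of rectangles, as in this minimal sketch for minimization problems:

```python
import numpy as np

def hv_2d(points, ref):
    """Bi-objective hypervolume (minimization) with respect to reference point ref."""
    pts = np.asarray([p for p in points if p[0] < ref[0] and p[1] < ref[1]], dtype=float)
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]       # sweep by f1 in ascending order
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                   # skip dominated points in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# e.g. two mutually non-dominated points against reference (2, 2):
print(hv_2d([[0.5, 1.5], [1.0, 1.0]], ref=(2.0, 2.0)))   # 1.25
```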

4.3 Discussion of Experiments

Table 1 and Table 3 show the IGD and HV values of the seven algorithms on LIR-CMOP1-14. According to the Friedman aligned test, PPS-M2M achieves the highest ranking among the seven CMOEAs. The p-values calculated from the statistics of the Friedman aligned test are close to zero, which reveals significant differences among the seven algorithms.

Table 2 and Table 4 show the adjusted p-values of the IGD and HV results for the Friedman aligned test, with PPS-M2M as the control method. To compare the statistical differences between PPS-M2M and the six other algorithms, we perform a series of post-hoc tests. To control the family-wise error rate (FWER), a set of post-hoc procedures, including the Holm procedure Sture1979A , the Holland procedure holland1987improved , the Finner procedure Finner1993On , the Hochberg procedure hochberg1988sharper , the Hommel procedure hommel1988stagewise , the Rom procedure Rom1990A and the Li procedure Li2008A , are used according to Derrac2011A . Since each adjusted p-value in Table 2 and Table 4 is less than the preset significance level of 0.05, we can conclude that PPS-M2M is significantly better than the six other algorithms.
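For reference, the Holm step-down adjustment used among these post-hoc procedures can be computed as in the sketch below (the other procedures differ only in the adjustment formula). The example feeds in the rounded unadjusted IGD p-values from Table 2, so the output only approximately reproduces the tabulated Holm column.

```python
import numpy as np

def holm_adjust(p_values):
    """Holm step-down adjusted p-values for k hypotheses versus a control method."""
    p = np.asarray(p_values, dtype=float)
    k = len(p)
    order = np.argsort(p)                      # ascending unadjusted p-values
    adjusted = np.empty(k)
    running_max = 0.0
    for i, idx in enumerate(order):
        running_max = max(running_max, (k - i) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

# Rounded unadjusted IGD p-values from Table 2 (two smallest shown as 0 there):
print(holm_adjust([0.0, 0.0, 0.000004, 0.000015, 0.014306, 0.028739]))
```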

Test Instance PPS-M2M M2M Epsilon CDP SR C-MOEA/D NSGA-II-CDP
LIRCMOP1 mean 2.341E-02 3.106E-02 5.74E-02 1.11E-01 1.81E-02 1.26E-01 3.23E-01
std 8.369E-03 9.888E-03 2.89E-02 5.04E-02 1.66E-02 7.03E-02 7.33E-02
LIRCMOP2 mean 1.604E-02 2.742E-02 5.39E-02 1.43E-01 9.63E-03 1.40E-01 3.03E-01
std 7.606E-03 8.596E-03 2.13E-02 5.55E-02 7.23E-03 5.44E-02 7.24E-02
LIRCMOP3 mean 3.330E-02 4.383E-02 8.81E-02 2.61E-01 1.78E-01 2.80E-01 4.08E-01
std 1.274E-02 1.445E-02 4.36E-02 4.33E-02 7.20E-02 4.21E-02 1.15E-01
LIRCMOP4 mean 3.738E-02 4.187E-02 6.51E-02 2.53E-01 1.95E-01 2.59E-01 3.85E-01
std 1.421E-02 2.371E-02 3.01E-02 4.34E-02 6.40E-02 3.51E-02 1.35E-01
LIRCMOP5 mean 8.343E-03 2.941E-01 1.15E+00 1.05E+00 1.04E+00 1.10E+00 5.53E-01
std 1.750E-03 5.013E-01 1.98E-01 3.63E-01 3.66E-01 2.99E-01 6.88E-01
LIRCMOP6 mean 9.631E-03 5.356E-01 1.27E+00 1.09E+00 9.43E-01 1.31E+00 5.74E-01
std 1.711E-03 5.509E-01 2.95E-01 5.20E-01 5.90E-01 2.08E-01 4.21E-01
LIRCMOP7 mean 9.335E-03 5.237E-01 1.51E+00 1.46E+00 1.08E+00 1.56E+00 2.38E-01
std 2.459E-03 7.677E-01 5.09E-01 5.58E-01 7.58E-01 4.24E-01 4.06E-01
LIRCMOP8 mean 9.351E-03 7.924E-01 1.62E+00 1.38E+00 1.01E+00 1.58E+00 6.02E-01
std 2.143E-03 7.455E-01 3.05E-01 6.15E-01 7.24E-01 3.71E-01 7.39E-01
LIRCMOP9 mean 2.886E-01 4.599E-01 4.90E-01 4.81E-01 4.85E-01 4.81E-01 6.44E-01
std 1.239E-01 6.426E-02 4.22E-02 5.24E-02 4.78E-02 5.24E-02 1.60E-02
LIRCMOP10 mean 1.894E-02 2.291E-01 2.13E-01 2.16E-01 1.92E-01 2.13E-01 5.97E-01
std 5.322E-02 8.123E-02 5.32E-02 6.81E-02 6.81E-02 4.63E-02 3.20E-02
LIRCMOP11 mean 1.194E-02 4.400E-01 3.47E-01 3.42E-01 3.16E-01 3.81E-01 4.87E-01
std 3.574E-02 1.038E-01 9.28E-02 9.22E-02 7.49E-02 8.95E-02 1.05E-02
LIRCMOP12 mean 8.071E-02 1.488E-01 2.52E-01 2.69E-01 2.06E-01 2.50E+00 5.80E-01
std 6.031E-02 5.757E-02 8.98E-02 9.06E-02 5.61E-02 9.63E-02 1.17E-01
LIRCMOP13 mean 1.858E-01 9.751E-01 1.20E+00 1.21E+00 8.86E-01 1.18E+00 1.39E+01
std 3.009E-02 5.292E-01 3.06E-01 3.17E-01 5.76E-01 3.78E-01 2.26E+00
LIRCMOP14 mean 1.759E-01 9.429E-01 1.02E+00 1.11E+00 1.03E+00 1.25E+00 1.36E+01
std 3.257E-02 5.152E-01 4.86E-01 3.98E-01 4.70E-01 5.30E-02 2.17E+00
Friedman Test 1.1429 2.9286 4.6786 4.8929 3.1429 5.5714 5.6429
Table 1: IGD results of PPS-M2M and the other six CMOEAs on LIR-CMOP1-14. To facilitate the display of this table, Epsilon, CDP, and SR in this table are short for MOEA/D-Epsilon, MOEA/D-CDP, and MOEA/D-SR respectively. Friedman test at a 0.05 significance level is performed between PPS-M2M and each of the other six CMOEAs.
i algorithm unadjusted p_Holm p_Hochberg p_Hommel p_Holland p_Rom p_Finner p_Li
1 NSGA-II-CDP 0 0 0 0 0 0 0 0
2 C-MOEA/D 0 0 0 0 0 0 0 0
3 CDP 0.000004 0.000017 0.000017 0.000017 0.000017 0.000017 0.000009 0.000005
4 Epsilon 0.000015 0.000045 0.000045 0.000045 0.000045 0.000045 0.000022 0.000015
5 SR 0.014306 0.028612 0.028612 0.028612 0.028407 0.028612 0.017142 0.014515
6 M2M 0.028739 0.028739 0.028739 0.028739 0.028739 0.028739 0.028739 0.028739
Table 2: Adjusted p-values for the Friedman Aligned test in terms of mean metric (IGD).
Test Instance PPS-M2M M2M Epsilon CDP SR C-MOEA/D NSGA-II-CDP
LIRCMOP1 mean 1.003E+00 9.897E-01 9.590E-01 7.540E-01 9.960E-01 7.410E-01 5.160E-01
std 7.312E-03 1.322E-02 3.280E-02 8.950E-02 2.910E-02 1.220E-01 5.570E-02
LIRCMOP2 mean 1.334E+00 1.321E+00 1.280E+00 1.060E+00 1.340E+00 1.070E+00 8.240E-01
std 1.083E-02 1.124E-02 2.880E-02 1.080E-01 1.470E-02 9.100E-02 1.150E-01
LIRCMOP3 mean 8.499E-01 8.407E-01 7.980E-01 4.860E-01 5.910E-01 4.710E-01 4.080E-01
std 1.423E-02 1.787E-02 3.930E-02 4.310E-02 1.070E-01 4.090E-02 2.880E-02
LIRCMOP4 mean 1.059E+00 1.051E+00 1.020E+00 7.350E-01 8.150E-01 7.310E-01 6.170E-01
std 1.831E-02 3.870E-02 4.190E-02 5.440E-02 8.700E-02 5.160E-02 1.060E-01
LIRCMOP5 mean 1.451E+00 1.084E+00 4.300E-02 1.630E-01 1.820E-01 9.720E-02 9.390E-01
std 4.778E-03 6.097E-01 2.350E-01 4.430E-01 4.390E-01 3.700E-01 3.210E-01
LIRCMOP6 mean 1.119E+00 5.337E-01 5.400E-02 1.880E-01 3.020E-01 2.330E-02 4.130E-01
std 2.051E-03 3.987E-01 2.210E-01 3.870E-01 4.620E-01 1.280E-01 1.890E-01
LIRCMOP7 mean 3.002E+00 2.021E+00 3.030E-01 3.740E-01 9.880E-01 2.040E-01 2.400E+00
std 8.630E-03 1.349E+00 9.070E-01 9.580E-01 1.270E+00 7.520E-01 6.520E-01
LIRCMOP8 mean 3.002E+00 1.498E+00 1.060E-01 5.170E-01 1.100E+00 1.660E-01 1.900E+00
std 5.415E-03 1.277E+00 5.490E-01 1.050E+00 1.200E+00 6.110E-01 7.560E-01
LIRCMOP9 mean 3.287E+00 2.826E+00 2.740E+00 2.770E+00 2.750E+00 2.770E+00 2.060E+00
std 1.926E-01 2.114E-01 1.480E-01 1.840E-01 1.640E-01 1.840E-01 1.080E-02
LIRCMOP10 mean 3.212E+00 2.845E+00 2.890E+00 2.880E+00 2.930E+00 2.890E+00 2.040E+00
std 8.954E-02 1.720E-01 1.020E-01 1.360E-01 1.350E-01 9.770E-02 4.450E-02
LIRCMOP11 mean 4.359E+00 3.115E+00 3.340E+00 3.350E+00 3.380E+00 3.240E+00 3.110E+00
std 1.167E-01 3.003E-01 2.570E-01 2.570E-01 2.900E-01 2.550E-01 1.540E-02
LIRCMOP12 mean 5.449E+00 5.221E+00 4.880E+00 4.830E+00 5.030E+00 4.890E+00 3.280E+00
std 1.852E-01 1.878E-01 3.170E-01 3.280E-01 1.750E-01 3.450E-01 3.610E-01
LIRCMOP13 mean 4.856E+00 1.518E+00 4.550E-01 4.630E-01 1.890E+00 6.290E-01 0.000E+00
std 1.378E-01 2.084E+00 1.300E+00 1.420E+00 2.570E+00 1.710E+00 0.000E+00
LIRCMOP14 mean 5.425E+00 1.744E+00 1.330E+00 8.810E-01 1.270E+00 1.800E-01 0.000E+00
std 1.240E-01 2.314E+00 2.450E+00 1.970E+00 2.290E+00 2.600E-01 0.000E+00
Friedman Test 1.0714 2.9286 4.8214 4.8929 3.2143 5.3571 5.7143
Table 3: HV results of PPS-M2M and the other six CMOEAs on LIR-CMOP1-14. To facilitate the display of this table, Epsilon, CDP, and SR in this table are short for MOEA/D-Epsilon, MOEA/D-CDP, and MOEA/D-SR respectively. Friedman test at a 0.05 significance level is performed between PPS-M2M and each of the other six CMOEAs.
i algorithm unadjusted p_Holm p_Hochberg p_Hommel p_Holland p_Rom p_Finner p_Li
1 NSGA-II-CDP 0 0 0 0 0 0 0 0
2 C-MOEA/D 0 0.000001 0.000001 0.000001 0.000001 0.000001 0 0
3 CDP 0.000003 0.000011 0.000011 0.000009 0.000011 0.000011 0.000006 0.000003
4 Epsilon 0.000004 0.000013 0.000013 0.000013 0.000013 0.000013 0.000007 0.000004
5 SR 0.008679 0.017358 0.017358 0.017358 0.017282 0.017358 0.010406 0.008804
6 M2M 0.022934 0.022934 0.022934 0.022934 0.022934 0.022934 0.022934 0.022934
Table 4: Adjusted p-values for the Friedman Aligned test in terms of mean metric (HV).

In order to further discuss the advantages of the proposed PPS-M2M in solving CMOPs, we plot the non-dominated solutions achieved by each algorithm (from the run with the median HV value) on LIR-CMOP2, LIR-CMOP7 and LIR-CMOP11. The feasible and infeasible regions of LIR-CMOP2, LIR-CMOP7 and LIR-CMOP11, corresponding to three different types of difficulties fan2019dascmop , are plotted in Fig. 3.

As shown in Fig. 3(a), the feasible region of LIR-CMOP2 is very small, which makes it a feasibility-hard problem. The non-dominated solutions achieved by each algorithm on LIR-CMOP2 with the median HV values are plotted in Fig. 4. We can observe that the proposed PPS-M2M converges to the true PF and has good diversity, as shown in Fig. 4(a). The convergence of M2M is worse than that of PPS-M2M, as shown in Fig. 4(b). For the remaining five algorithms, the diversity is worse than that of PPS-M2M, as shown in Fig. 4(c)-(g).

In LIR-CMOP7, there are three large non-overlapping infeasible regions in front of the unconstrained PF, as illustrated in Fig. 3(b). In addition, the unconstrained PF is covered by one of the infeasible regions, which makes LIR-CMOP7 a convergence-hard problem. The results of PPS-M2M and the six other algorithms (M2M, MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP, C-MOEA/D and NSGA-II-CDP) on LIR-CMOP7 are shown in Fig. 5. We can see that only PPS-M2M and M2M get across the infeasible regions to reach the true PF, as illustrated in Fig. 5(a)-(b), while the other five algorithms (MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP, C-MOEA/D and NSGA-II-CDP) cannot converge to the true PF, as illustrated by Fig. 5(c)-(g). A possible reason is that the two large infeasible regions in front of the PF block the populations of these five algorithms on their way toward the constrained PF.

The PF of LIR-CMOP11 is discrete, as shown in Fig. 3(c), which makes it a diversity-hard problem. There are seven Pareto optimal solutions: two of them are located on the unconstrained PF, and the other five are located only on the constrained PF. Fig. 6 shows the results of the seven tested algorithms on LIR-CMOP11. From Fig. 6(a)-(b), we can see that only PPS-M2M and M2M find all the Pareto optimal solutions. Furthermore, PPS-M2M has better convergence than M2M. A possible reason is that all sub-populations are combined into a whole population, which is evolved by employing the improved epsilon constraint-handling method and the ε-dominance technique during the last ten percent of the maximum generations. The other five algorithms (MOEA/D-Epsilon, MOEA/D-SR, MOEA/D-CDP, C-MOEA/D and NSGA-II-CDP) find only part of the Pareto optimal solutions, because infeasible regions block their populations on the way toward the true PF.

Based on the above observations and analysis, we can conclude that the proposed PPS-M2M outperforms the other six CMOEAs on most of the test instances.

Figure 3: Illustrations of the feasible and infeasible regions of (a) LIR-CMOP2, (b) LIR-CMOP7 and (c) LIR-CMOP11, corresponding to three different types of difficulties as discussed in fan2019dascmop .
Figure 4: The non-dominated solutions achieved by each algorithm on LIR-CMOP2 with the median HV values: (a) PPS-M2M, (b) M2M, (c) MOEA/D-Epsilon, (d) MOEA/D-CDP, (e) MOEA/D-SR, (f) C-MOEA/D, (g) NSGA-II-CDP.
Figure 5: The non-dominated solutions achieved by each algorithm on LIR-CMOP7 with the median HV values: (a) PPS-M2M, (b) M2M, (c) MOEA/D-Epsilon, (d) MOEA/D-CDP, (e) MOEA/D-SR, (f) C-MOEA/D, (g) NSGA-II-CDP.
Figure 6: The non-dominated solutions achieved by each algorithm on LIR-CMOP11 with the median HV values: (a) PPS-M2M, (b) M2M, (c) MOEA/D-Epsilon, (d) MOEA/D-CDP, (e) MOEA/D-SR, (f) C-MOEA/D, (g) NSGA-II-CDP.

5 Conclusion

This paper proposes a new algorithm (PPS-M2M) that combines a multi-objective to multi-objective (M2M) decomposition approach with the push and pull search (PPS) framework to deal with CMOPs. More specifically, the search process of PPS-M2M is divided into two stages, namely the push and pull search processes. At the push search stage, PPS-M2M uses the M2M decomposition method to decompose a CMOP into a set of simple CMOPs, which correspond to a set of sub-populations. Each simple CMOP is solved in a collaborative manner without considering any constraints, which helps the sub-populations get across infeasible regions effortlessly. Furthermore, some constrained landscape information can be estimated during the push search stage, such as the ratio of feasible to infeasible solutions and the maximum overall constraint violation, which is further employed to guide the parameter settings of the constraint-handling mechanisms in the pull search stage. When the maximum rate of change between the ideal and nadir points is less than or equal to a predefined threshold, the search process is switched to the pull search stage. At the beginning of the pull search stage, the infeasible solutions of each sub-population achieved in the push stage are pulled toward the feasible and non-dominated regions by adopting an improved epsilon constraint-handling approach. During the last ten percent of the maximum generations, all sub-populations are combined into a whole population, which is evolved by employing the improved epsilon constraint-handling method and the ε-dominance technique. The comprehensive experimental results demonstrate that the proposed PPS-M2M significantly outperforms the other six CMOEAs on most of the benchmark problems.

It is also worth noting that there are few studies on using the M2M decomposition method to solve CMOPs; the proposed PPS-M2M provides one feasible approach. Obviously, further improving the performance of PPS-M2M requires a lot of work, such as designing enhanced constraint-handling mechanisms for the pull search stage and integrating machine learning methods to dynamically allocate computational resources among the sub-populations of PPS-M2M. In addition, other benchmark problems (e.g., CF1-CF10 zhang2008multiobjective ) and real-world optimization problems will be investigated using the proposed PPS-M2M.

Acknowledgement

This work was supported in part by the National Natural Science Foundation of China under Grants 61175073, 61300159, 61332002 and 51375287, the Guangdong Key Laboratory of Digital Signal and Image Processing, the Science and Technology Planning Project of Guangdong Province (2013B011304002), and the Project of the Educational Commission of Guangdong Province, China (2015KGJHZ014).

References

  • (1) D. Kalyanmoy, et al., Multi objective optimization using evolutionary algorithms, John Wiley and Sons, 2001.
  • (2) K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation 6 (2) (2002) 182–197.

  • (3) X. Cai, Z. Mei, Z. Fan, Q. Zhang, A constrained decomposition approach with grids for evolutionary multiobjective optimization, IEEE Transactions on Evolutionary Computation 22 (4) (2018) 564–577.
  • (4) X. Cai, Z. Mei, Z. Fan, A decomposition-based many-objective evolutionary algorithm with two types of adjustments for direction vectors, IEEE Transactions on Cybernetics 48 (8) (2018) 2335–2348.
  • (5) Z. Fan, Y. Fang, W. Li, X. Cai, C. Wei, E. Goodman, MOEA/D with angle-based constrained dominance principle for constrained multi-objective optimization problems, Applied Soft Computing 74 (2019) 621–633.
  • (6) Z. Fan, W. Li, X. Cai, H. Huang, Y. Fang, Y. You, J. Mo, C. Wei, E. Goodman, An improved epsilon constraint-handling method in MOEA/D for CMOPs with large infeasible regions, Soft Computing (2017) 1–20.
  • (7) Y. Wang, J.-P. Li, X. Xue, B.-C. Wang, Utilizing the correlation between constraints and objective function for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation.
  • (8) Z.-Z. Liu, Y. Wang, Handling constrained multiobjective optimization problems with constraints in both the decision and objective spaces, IEEE Transactions on Evolutionary Computation.
  • (9) G. Wang, X. Cai, Z. Cui, G. Min, J. Chen, High performance computing for cyber physical social systems by using evolutionary multi-objective optimization algorithm, IEEE Transactions on Emerging Topics in Computing (2018) 1–1, doi:10.1109/TETC.2017.2703784.
  • (10) X. Cai, Y. Li, Z. Fan, Q. Zhang, An external archive guided multiobjective evolutionary algorithm based on decomposition for combinatorial optimization, IEEE Transactions on Evolutionary Computation 19 (4) (2015) 508–523.

  • (11) K. Deb, R. Datta, A fast and accurate solution of constrained optimization problems using a hybrid bi-objective and penalty function approach, in: IEEE Congress on Evolutionary Computation, IEEE, 2010, pp. 1–8.
  • (12) K. Deb, An efficient constraint handling method for genetic algorithms, Computer Methods in Applied Mechanics and Engineering 186 (2-4) (2000) 311–338.
  • (13) T. P. Runarsson, X. Yao, Stochastic ranking for constrained evolutionary optimization, IEEE Transactions on Evolutionary Computation 4 (3) (2000) 284–294.
  • (14) T. Takahama, S. Sakai, Constrained optimization by the constrained differential evolution with an archive and gradient-based mutation, in: Evolutionary Computation (CEC), 2010 IEEE Congress on, IEEE, 2010, pp. 1–9.
  • (15) Y. Wang, Z. Cai, G. Guo, Y. Zhou, Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 37 (3) (2007) 560–575.
  • (16) A. Homaifar, C. X. Qi, S. H. Lai, Constrained optimization via genetic algorithms, Simulation 62 (4) (1994) 242–253.
  • (17) J. A. Joines, C. R. Houck, On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA’s, in: Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence, IEEE, 1994, pp. 579–584.
  • (18) G.-G. Wang, Y. Tan, Improving metaheuristic algorithms with information feedback models, IEEE Transactions on Cybernetics (49) (2017) 542–555.
  • (19) Y. G. Woldesenbet, G. G. Yen, B. G. Tessema, Constraint handling in multiobjective evolutionary optimization, IEEE Transactions on Evolutionary Computation 13 (3) (2009) 514–525.
  • (20) E. Mezura-Montes, J. Velázquez-Reyes, C. C. Coello, Modified differential evolution for constrained optimization, in: 2006 IEEE International Conference on Evolutionary Computation, IEEE, 2006, pp. 25–32.
  • (21) E. Mezura-Montes, A. G. Palomeque-Ortiz, Parameter control in differential evolution for constrained optimization, in: 2009 IEEE Congress on Evolutionary Computation, IEEE, 2009, pp. 1375–1382.
  • (22) Z. Fan, J. Liu, T. Sorensen, P. Wang, Improved differential evolution based on stochastic ranking for robust layout synthesis of MEMS components, IEEE Transactions on Industrial Electronics 56 (4) (2009) 937–948.
  • (23) G. Leguizamón, C. A. C. Coello, A boundary search based ACO algorithm coupled with stochastic ranking, in: 2007 IEEE Congress on Evolutionary Computation, IEEE, 2007, pp. 165–172.
  • (24) H.-L. Liu, F. Gu, Q. Zhang, Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems, IEEE Transactions on Evolutionary Computation 18 (3) (2014) 450–455.
  • (25) Z. Fan, W. Li, X. Cai, H. Li, C. Wei, Q. Zhang, K. Deb, E. Goodman, Push and pull search for solving constrained multi-objective optimization problems, Swarm and Evolutionary Computation 44 (2019) 665–679.
  • (26) M. Laumanns, L. Thiele, K. Deb, E. Zitzler, Combining convergence and diversity in evolutionary multiobjective optimization, Evolutionary Computation 10 (3) (2002) 263–282.
  • (27) Z. Yang, X. Cai, Z. Fan, Epsilon constrained method for constrained multiobjective optimization problems: some preliminary results, in: Proceedings of the Companion Publication of the 2014 Annual Conference on Genetic and Evolutionary Computation, ACM, 2014, pp. 1181–1186.
  • (28) M. A. Jan, R. A. Khanum, A study of two penalty-parameterless constraint handling techniques in the framework of MOEA/D, Applied Soft Computing 13 (1) (2013) 128–148.
  • (29) M. Asafuddoula, T. Ray, R. Sarker, K. Alam, An adaptive constraint handling approach embedded MOEA/D, in: 2012 IEEE Congress on Evolutionary Computation, IEEE, 2012, pp. 1–8.
  • (30) R. M. Rizk-Allah, R. A. El-Sehiemy, G.-G. Wang, A novel parallel hurricane optimization algorithm for secure emission/economic load dispatch solution, Applied Soft Computing 63 (2018) 206–222.
  • (31) T. Takahama, S. Sakai, Constrained optimization by the constrained differential evolution with gradient-based mutation and feasible elites, in: 2006 IEEE International Conference on Evolutionary Computation, IEEE, 2006, pp. 1–8.
  • (32) P. A. Bosman, D. Thierens, The balance between proximity and diversity in multiobjective evolutionary algorithms, Evolutionary Computation, IEEE Transactions on 7 (2) (2003) 174–188.
  • (33) E. Zitzler, L. Thiele, Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach, IEEE Transactions on Evolutionary Computation 3 (4) (1999) 257–271.
  • (34) S. Holm, A simple sequentially rejective multiple test procedure, Scandinavian Journal of Statistics 6 (2) (1979) 65–70.
  • (35) B. S. Holland, M. D. Copenhaver, An improved sequentially rejective bonferroni test procedure, Biometrics (1987) 417–423.
  • (36) H. Finner, On a monotonicity problem in step-down multiple test procedures, Publications of the American Statistical Association 88 (423) (1993) 920–923.
  • (37) Y. Hochberg, A sharper bonferroni procedure for multiple tests of significance, Biometrika 75 (4) (1988) 800–802.
  • (38) G. Hommel, A stagewise rejective multiple test procedure based on a modified bonferroni test, Biometrika 75 (2) (1988) 383–386.
  • (39) D. M. Rom, A sequentially rejective test procedure based on a modified bonferroni inequality, Biometrika 77 (3) (1990) 663–665.
  • (40) J. Li, A two-step rejection procedure for testing multiple hypotheses, Journal of Statistical Planning and Inference 138 (6) (2008) 1521–1527.
  • (41) J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm and Evolutionary Computation 1 (1) (2011) 3–18.
  • (42) Z. Fan, W. Li, X. Cai, H. Li, C. Wei, Q. Zhang, K. Deb, E. Goodman, Difficulty adjustable and scalable constrained multi-objective test problem toolkit, Evolutionary Computation (2019) accepted.
  • (43) Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, S. Tiwari, Multiobjective optimization test instances for the cec 2009 special session and competition, University of Essex, Colchester, UK and Nanyang technological University, Singapore, special session on performance assessment of multi-objective optimization algorithms, technical report 264.