I Introduction
Practical optimization problems usually involve the simultaneous optimization of multiple conflicting objectives under many constraints. Without loss of generality, constrained multi-objective optimization problems (CMOPs) can be defined as follows:
minimize   F(x) = (f_1(x), f_2(x), ..., f_m(x))^T                    (1)
subject to g_i(x) ≥ 0,  i = 1, ..., q
           h_j(x) = 0,  j = 1, ..., p
           x ∈ R^n
where x = (x_1, ..., x_n)^T is an n-dimensional decision vector, F(x) is an m-dimensional objective vector, g_i(x) ≥ 0 defines the i-th of q inequality constraints, and h_j(x) = 0 defines the j-th of p equality constraints. If m is greater than three, we usually call the problem a constrained many-objective optimization problem (CMaOP). A solution x is said to be feasible if it meets g_i(x) ≥ 0 for every i and h_j(x) = 0 for every j at the same time. For two feasible solutions x^1 and x^2, solution x^1 is said to dominate x^2 if f_i(x^1) ≤ f_i(x^2) for each i ∈ {1, ..., m} and f_j(x^1) < f_j(x^2) for at least one j ∈ {1, ..., m}, denoted as x^1 ≼ x^2. For a feasible solution x*, if there is no other feasible solution dominating x*, then x* is said to be a feasible Pareto-optimal solution. The set of all feasible Pareto-optimal solutions is called the Pareto set (PS). Mapping the PS into the objective space yields a set of objective vectors, denoted as the Pareto front (PF), where PF = {F(x) | x ∈ PS}.
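The feasibility and dominance relations above can be sketched directly in code. The following is a minimal illustration (the function names are ours, not from the paper); equality constraints are checked within a small tolerance, as is usual in practice.

```python
def is_feasible(x, inequality_constraints, equality_constraints, tol=1e-6):
    """Feasible iff g_i(x) >= 0 for all i and |h_j(x)| <= tol for all j."""
    return (all(g(x) >= 0 for g in inequality_constraints)
            and all(abs(h(x)) <= tol for h in equality_constraints))

def dominates(f1, f2):
    """True iff objective vector f1 Pareto-dominates f2 (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))
```

Note that dominance is only compared between feasible solutions; how infeasible solutions are ranked is exactly what the constraint handling mechanisms discussed below decide.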
For CMOPs, more than one objective needs to be optimized simultaneously subject to constraints. Generally speaking, CMOPs are much more difficult to solve than their unconstrained counterparts, i.e., unconstrained multi-objective optimization problems (MOPs). Constrained multi-objective evolutionary algorithms (CMOEAs) are specifically designed to solve CMOPs, with the capability of balancing the search between the feasible and infeasible regions of the search space [1]. In fact, two basic issues need to be considered carefully when designing a CMOEA: one is to balance feasible and infeasible solutions; the other is to balance the convergence and diversity of the CMOEA.
To address the former issue, constraint handling mechanisms need to be carefully designed. The existing constraint handling methods can be broadly classified into five types: feasibility maintenance, use of penalty functions, separation of constraint violation and objective values, multi-objective constraint handling, and hybrid methods [2]. The feasibility maintenance methods usually adopt special encoding and decoding techniques to guarantee that a newly generated solution is feasible. The penalty function-based method is one of the most popular approaches: the overall constraint violation is added to each objective with a predefined penalty factor, which expresses a preference between the constraints and the objectives. Penalty function-based methods include static penalties [3], dynamic penalties [4], death penalty functions [3], co-evolutionary penalty functions [5], adaptive penalty functions [6, 7, 8], self-adaptive penalty functions [9, 10], etc. In the methods separating constraint violation and objective values, the constraint functions and the objective functions are treated separately. Variants of this type include stochastic ranking (SR) [11], the constraint dominance principle (CDP) [12], and epsilon-constrained methods [13, 14]. In the multi-objective constraint handling methods, the constraint functions are transformed into one extra objective function. Representative methods of this type include the infeasibility driven evolutionary algorithm (IDEA) [15], COMOGA [16], and Cai and Wang's method (CW) [17]. The hybrid methods usually combine several constraint-handling methods. Representative methods include the adaptive trade-off model (ATM) [18] and the ensemble of constraint handling methods (ECHM) [19].
To address the second issue, selection methods need to be designed to balance convergence and diversity in MOEAs. At present, MOEAs can be generally classified into three categories based on their selection strategies: Pareto-dominance based methods (e.g., NSGA-II [20], PAES-II [21] and SPEA-II [22]), decomposition-based methods (e.g., MOEA/D [23], MOEA/D-DE [24], MOEA/D-M2M [25] and EAG-MOEA/D [26]) and indicator-based methods (e.g., IBEA [27], R2-IBEA [28], SMS-EMOA [29] and HypE [30]). In the Pareto-dominance based methods, such as NSGA-II [20], the set of first non-dominated level solutions is selected to improve convergence, and the crowding distance is adopted to maintain diversity. In the decomposition-based methods, convergence is maintained by minimizing the aggregation functions, and diversity is obtained by setting the weight vectors uniformly. In the indicator-based methods, such as HypE [30], convergence and diversity are achieved by using the hypervolume metric.
A CMOP includes objectives and constraints. A number of features have already been identified to define the difficulty of objectives, which include:
- Geometry of the PF (linear, convex, concave, degenerate, disconnected, or a mixture of them)
- Search space (biased or unbiased)
- Unimodal or multimodal objectives
- Dimensionality of the variable space and the objective space
The first is the geometry of the PF, which can be linear, convex, concave, degenerate, disconnected, or a mixture of them. Representative MOPs reflecting this type of difficulty include ZDT [31], F1-F9 [32] and DTLZ [33]. The second is a biased or unbiased search space. Representative MOPs in this category include MOP1-MOP7 [34] and IMB1-IMB14 [35]. The third is the modality of the objectives. The objectives of a MOP can be either unimodal (e.g., DTLZ1 [33]) or multimodal (e.g., F8 [32]). Multimodal objectives have multiple local optima, which increases the likelihood of an algorithm being trapped. High dimensionality of the variable space and the objective space is also a critical feature defining the difficulty of objectives: LSMOP1-LSMOP9 [36] have high dimensionality in the variable space, while DTLZ [33] and WFG [37] have high dimensionality in the objective space.
On the other hand, constraint functions in general greatly increase the difficulty of solving CMOPs. However, as far as we know, only a few test suites (CTP [38], CF [39]) have been designed for CMOPs.
The CTP test problems [38] have the capability of adjusting the difficulty of the constraint functions. They offer two types of difficulty: difficulty near the Pareto front and difficulty in the entire search space. The test problem CTP1 gives difficulty near the PF, because its constraint functions make the search region close to the Pareto front infeasible. Test problems CTP2-CTP8 present an optimizer with difficulty in the entire search space.
The CF test problems [39] are also commonly used benchmarks, which provide two types of difficulty. For CF1-CF3 and CF8-CF10, the PFs are parts of their unconstrained PFs. The remaining CF test problems, CF4-CF7, have difficulty near their PFs, and many constrained Pareto-optimal points lie on the boundaries of the constraints.
Even though CTP [38] and CF [39] offer the above-mentioned advantages, they have some limitations:
- The number of decision variables in the constraint functions cannot be extended.
- The difficulty level of each type is not adjustable.
- No constraint functions with low ratios of feasible regions in the entire search space are suggested.
- The number of objectives is not scalable.
Some other commonly used two-objective test problems include the BNH [40], TNK [41], SRN [42] and OSY [43] problems, which are not scalable in the number of objectives, and whose types of difficulty are hard to identify.
In this paper, we propose a general framework to construct difficulty adjustable and objective scalable CMOPs, which can overcome the limitations of existing CMOPs. CMOPs constructed by this toolkit can be classified into three major types: feasibility-hard, convergence-hard and diversity-hard CMOPs. A feasibility-hard CMOP is a type of problem that makes it difficult for CMOEAs to find feasible solutions in the search space; CMOPs with feasibility-hardness usually have small portions of feasible regions in the entire search space. CMOPs with convergence-hardness mainly make it difficult for CMOEAs to approach the PFs efficiently by setting many obstacles before the PFs, while CMOPs with diversity-hardness mainly make it difficult for CMOEAs to distribute their solutions along the complete PFs. In our work, the three types of difficulty are embedded into the CMOPs through proper construction of the constraint functions.
In summary, the contributions of this paper are as follows:

- This paper defines three primary types of difficulty for constraints in CMOPs. When designing a new constraint handling mechanism for a CMOEA, one has to investigate the nature of the constraints in the CMOPs that the CMOEA aims to address, including the types and levels of difficulty embedded in the constraints. Therefore, a proper definition of the types of difficulty for constraints in CMOPs is necessary and desirable.
- This paper also defines the level of each type of difficulty for constraints in the constructed CMOPs, which can be adjusted by users. A difficulty level is uniquely defined by a triplet, with each of its parameters specifying the level of one primary difficulty type. Combining the three primary constraint types with different difficulty triplets can lead to the construction of a large variety of constraints for CMOPs.
- Based on the proposed three primary types of difficulty for constraints, nine difficulty adjustable CMOPs named DAS-CMOP1-9 are constructed.
The remainder of this paper is organized as follows. Section II discusses the effects of constraints on PFs. Section III introduces the types and levels of difficulty provided by constraints in CMOPs. Section IV explains the proposed toolkit of construction methods for generating constraints in CMOPs with different types and levels of difficulty. Section V realizes the scalability to the number of objectives in CMOPs using the proposed toolkit. Section VI generates a set of difficulty adjustable CMOPs using the proposed toolkit. In Section VII, the performance of two CMOEAs on DAS-CMOP1-9 with different difficulty levels is compared through experimental studies, and Section VIII concludes the paper.
II Effects of Constraints on PFs
Constraints define the infeasible regions in the search space, leading to different types and levels of difficulty for the resulting CMOPs. Some major effects of the constraints on PFs in CMOPs include the following [44]:
- Infeasible regions make the original unconstrained PF partially feasible. This can be further divided into two situations. In the first, the PF of the constrained problem consists of a part of its unconstrained PF and a set of solutions on some constraint boundaries, as illustrated by Fig. 1(a). In the second, the PF of the constrained problem is only a part of its unconstrained PF, as illustrated by Fig. 1(b).
- Infeasible regions block the way toward the PF, as illustrated by Fig. 1(c).
- The complete original PF is covered by infeasible regions and is no longer feasible. Every constrained Pareto-optimal point lies on some constraint boundary, as illustrated by Fig. 1(d).
III Difficulty Types and Levels of CMOPs
Three primary difficulty types have been identified: convergence-hardness, diversity-hardness, and feasibility-hardness. A difficulty level for each primary difficulty type can be defined as a parameter ranging from 0 to 1. Three difficulty levels, corresponding to the three primary difficulty types respectively, form a triplet that depicts the nature of the difficulty of a CMOP.
III-A Difficulty 1: Diversity-hardness
Generally, the PFs of CMOPs with diversity-hardness have many discrete segments, or some parts are more difficult to reach than others because large infeasible regions are imposed near them. As a result, it is difficult to achieve the complete PF of such CMOPs.
III-B Difficulty 2: Feasibility-hardness
For feasibility-hard CMOPs, the ratio of feasible regions in the search space is usually very low, and it is difficult for a CMOEA to generate even a single feasible solution. In the initial stage of a CMOEA, most solutions in the population are often infeasible.
III-C Difficulty 3: Convergence-hardness
CMOPs with convergence-hardness hinder the convergence of CMOEAs toward the PFs: CMOEAs usually encounter more difficulty in approaching the PFs because infeasible regions block their way. In other words, the generational distance (GD) metric [45], which indicates the performance of convergence, is difficult to minimize during the evolutionary process.
III-D Difficulty Level of Each Primary Difficulty Type
The difficulty level of each primary difficulty type can be defined by a parameter in the parameterized constraint function corresponding to that type. Each parameter is normalized to the range from 0 to 1. Three parameters, corresponding to the difficulty levels of the three primary difficulty types respectively, form a triplet that exactly defines the nature of difficulty of a CMOP constructed from the three parameterized constraint functions.
If each element of the triplet can only take a value of either 0 or 1, then a simple combination of the three primary difficulty types gives rise to seven basic difficulty types. This is analogous to a simple combination of three primary colors giving rise to seven basic colors. But if we allow the three parameters to take any value between 0 and 1, then we can literally obtain countless difficulty natures (analogous to countless colors in the color space). A difficulty nature here is then precisely depicted by a triplet.
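The count of seven basic types follows directly from the combinatorics of 0/1 triplets; the short check below simply enumerates them (the variable name is ours).

```python
from itertools import product

# Each triplet element switches one primary difficulty type on (1) or off (0).
# Excluding (0, 0, 0), which carries no difficulty at all, leaves 2^3 - 1 = 7
# combinations -- the seven basic difficulty types T1-T7 in Table I.
basic_types = [t for t in product((0, 1), repeat=3) if t != (0, 0, 0)]
```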
Basic Difficulty Types  Comment
T1: Diversity-hardness  Distributing the feasible solutions along the complete PF is difficult.
T2: Feasibility-hardness  Obtaining a feasible solution is difficult.
T3: Convergence-hardness  Approaching a Pareto-optimal solution is difficult.
T4: Diversity-hardness and feasibility-hardness  Obtaining a feasible solution and the complete PF is difficult.
T5: Diversity-hardness and convergence-hardness  Approaching a Pareto-optimal solution and the complete PF is difficult.
T6: Feasibility-hardness and convergence-hardness  Obtaining a feasible solution and approaching a Pareto-optimal solution is difficult.
T7: Diversity-hardness, feasibility-hardness and convergence-hardness  Obtaining a Pareto-optimal solution and the complete PF is difficult.
IV Construction Toolkit
As we know, constructing a CMOP consists of constructing two major parts: objective functions and constraint functions. Li et al. [46] suggested a general framework for constructing objective functions, stated as follows:

f_i(x) = α_i(x_I) + β_i(x_II),  i = 1, ..., m        (2)

where x_I and x_II are two sub-vectors of x. The function α_i(x_I) is called the shape function, and β_i(x_II) is called the non-negative distance function. Each objective function f_i(x) is the sum of the shape function α_i(x_I) and the non-negative distance function β_i(x_II). We adopt Li et al.'s method [46] in this work.
In terms of constructing the constraint functions, three different types are suggested in this paper, corresponding to the proposed three primary types of difficulty of CMOPs. More specifically, Type-I constraint functions provide the difficulty of diversity-hardness, Type-II constraint functions introduce the difficulty of feasibility-hardness, and Type-III constraint functions generate the difficulty of convergence-hardness. The three types of constraint functions are defined in detail as follows.
IV-A Type-I Constraint Functions: Diversity-hardness
Type-I constraint functions are defined to limit the boundary of the sub-vector used by the shape functions. More specifically, this type of constraint function divides the PF of a CMOP into a number of disconnected segments, generating the difficulty of diversity-hardness. We use a parameter ranging from 0 to 1 to represent the level of difficulty: 0 means the constraint functions impose no effect on the CMOP, while 1 means they exert their maximum effect.
An example of a CMOP with diversity-hardness is suggested as follows:

(3)

where one parameter controls the number of disconnected segments in the PF and another controls the width of each segment; the parameter indicating the level of difficulty is derived from the latter. The width of the segments reaches its maximum at the smallest setting of the width parameter; as that parameter increases, the width of the segments decreases and the difficulty level increases accordingly. The PFs under two increasingly difficult settings are shown in Fig. 4(a) and Fig. 4(b): it can be observed that the width of the PF segments is reduced as the parameter keeps increasing. At the maximum setting, the width of the segments shrinks to zero, which provides the maximum level of difficulty. The PF of a three-objective CMOP with Type-I constraint functions is also shown in Fig. 4(d). This shows that Type-I constraint functions can be applied to CMOPs with more than two objectives, which means that a CMOP scalable in the number of objectives can be constructed using this type of constraint.
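To illustrate how a periodic constraint can carve the feasible region into segments of adjustable width, the sketch below assumes a sine-based constraint of the form sin(a·π·x1) ≥ b; this form and the parameter names a and b are our illustrative assumptions, not the exact definition in Eq. (3).

```python
import math

def type1_constraint(x1, a=20, b=0.5):
    """Illustrative Type-I-style constraint: feasible where sin(a*pi*x1) >= b.
    a controls the number of feasible segments along x1 (hence PF segments);
    b controls the width of each segment (b -> 1 shrinks them to points)."""
    return math.sin(a * math.pi * x1) - b  # feasible iff >= 0

def feasible_fraction(a=20, b=0.5, samples=100000):
    """Fraction of x1 in [0, 1) satisfying the constraint, by dense sampling."""
    hits = sum(type1_constraint(i / samples, a, b) >= 0 for i in range(samples))
    return hits / samples
```

Raising b narrows every feasible segment simultaneously, which mimics how a rising diversity-hardness level makes the complete PF harder to cover.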
IV-B Type-II Constraint Functions: Feasibility-hardness
Type-II constraint functions are set to limit the reachable boundary of the distance function, and thereby control the ratio of feasible regions. As a result, Type-II constraint functions generate the difficulty of feasibility-hardness. We use a parameter ranging from 0 to 1 to represent the level of difficulty: 0 means the constraints are the weakest, and 1 means the constraint functions are the strongest.
For example, a CMOP with Type-II constraint functions can be defined as follows:

(4)

where one parameter controls the distance between the constrained PF and the unconstrained PF, and the difficulty parameter controls the ratio of feasible regions. At the lowest difficulty setting, the feasible area reaches its maximum, as shown in Fig. 5(a). As the difficulty parameter increases, the feasible area decreases, as shown in Fig. 5(b); near the maximum setting, the feasible area in the objective space becomes very small, and the PF of this problem is shown in Fig. 5(c). Type-II constraints can also be applied to CMOPs with three objectives, as shown in Fig. 5(d).
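The shrinking-feasible-band effect can be sketched numerically. The band form below (a feasible interval for the distance value whose width decays as the difficulty parameter eta rises) is an illustrative assumption, not the exact Eq. (4); the offset value is likewise made up for the example.

```python
def type2_feasible(beta, eta):
    """Illustrative Type-II-style constraint on a non-negative distance value
    beta: only beta inside [offset, offset + (1 - eta)] is feasible, so the
    feasible band shrinks as the difficulty parameter eta rises toward 1."""
    offset = 0.5  # assumed distance between constrained and unconstrained PF
    return offset <= beta <= offset + (1.0 - eta)

def feasible_ratio(eta, samples=10000, beta_max=2.0):
    """Estimate the feasible fraction of beta values in [0, beta_max)."""
    hits = sum(type2_feasible(i * beta_max / samples, eta) for i in range(samples))
    return hits / samples
```

As eta approaches 1, the feasible ratio tends to zero, which is exactly the regime where most of an initial population is infeasible.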
IV-C Type-III Constraint Functions: Convergence-hardness
Type-III constraint functions limit the reachable boundary of the objectives, so the infeasible regions act as 'blocking' obstacles that hinder the search populations of CMOEAs from approaching the PF. As a result, Type-III constraint functions generate the difficulty of convergence-hardness. We use a parameter ranging from 0 to 1 to represent the level of difficulty: 0 means the constraints are the weakest, 1 means they are the strongest, and the difficulty level increases as the parameter increases.
For example, a CMOP with Type-III constraint functions can be defined as follows:

(5)

where the level-of-difficulty parameter is defined in terms of the constraint parameters. At a low difficulty setting, the PF is shown in Fig. 6(a). As the parameter increases, the infeasible regions grow, as shown in Fig. 6(b), and grow further still at a higher setting, as shown in Fig. 6(c). Type-III constraints can also be applied to CMOPs with three objectives, as shown in Fig. 6(d).
Type-III constraint functions can also be expressed in a matrix form, defined as follows:

(6)

where a translation vector shifts the objective vector and a transformation matrix controls the degree of rotation and stretching of the translated objective vector. The translation vector and transformation matrix corresponding to the Type-III constraint functions in Eq. (5) can be written out accordingly.
It is worthwhile to point out that with this approach we can further extend the number of objectives beyond three, even though more sophisticated visualization approaches are needed to show the resulting CMOPs in the objective space.
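A translate-then-rotate constraint of this kind can be sketched as a rotated elliptical infeasible region in a two-objective space. The center, angle and axis lengths below are illustrative assumptions, not the values of Eq. (6).

```python
import math

def type3_constraint(f, center=(1.0, 1.0), theta=0.25 * math.pi,
                     axes=(0.3, 0.1), r=1.0):
    """Illustrative Type-III-style constraint: a rotated elliptical infeasible
    region in objective space, written as a translation (center) followed by a
    rotation by theta. Feasible iff the quadratic form is >= r, i.e. the
    objective vector lies outside the ellipse 'blocking' the way to the PF."""
    df1, df2 = f[0] - center[0], f[1] - center[1]   # translate
    u = df1 * math.cos(theta) + df2 * math.sin(theta)   # rotate
    v = -df1 * math.sin(theta) + df2 * math.cos(theta)
    return (u / axes[0]) ** 2 + (v / axes[1]) ** 2 - r  # feasible iff >= 0
```

Scattering several such ellipses between the population and the PF yields the 'blocking' behavior of convergence-hardness; shrinking the gaps between them raises the difficulty level.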
To summarize, the three types of constraint functions discussed above correspond to the three primary difficulty types of CMOPs respectively: Type-I constraint functions correspond to diversity-hardness, Type-II to feasibility-hardness, and Type-III to convergence-hardness. The level of each primary difficulty type is decided by a parameter, and the three parameters form a triplet that specifies the difficulty levels of a particular difficulty nature. It is noteworthy that this construction toolkit for CMOPs can also be scaled to generate CMOPs with more than three objective functions. The scalability to the number of objectives is discussed in more detail in Section V.
V Scalability to the Number of Objectives
Recently, many-objective optimization has attracted a lot of research interest, which makes scalability to the number of objectives a desirable feature of CMOPs. A general framework to construct CMOPs scalable in the number of objectives is given in Eq. (7).
In Eq. (7), we borrow the idea of the WFG toolkit [37] to construct objectives, which can be scaled to any number of objectives. More specifically, the number of objectives is controlled by a user-defined parameter.
The three types of constraint functions proposed in Section IV can be combined with the scalable objectives to construct difficulty adjustable and scalable CMOPs (DAS-CMOPs). More specifically, the first group of constraint functions is of Type-I and limits the reachable boundary of the decision variables in the shape functions, controlling the difficulty level of diversity-hardness. The second group belongs to Type-II and limits the reachable boundary of the distance functions, controlling the difficulty level of feasibility-hardness. The last group of constraint functions is set directly on each objective and belongs to Type-III: it generates a number of infeasible regions which hinder the working population of a CMOEA from approaching the PF, and it controls the difficulty level of convergence-hardness. The remaining parameters in Eq. (7) are explained as follows.
Three parameters are used to control the number of constraint functions of each type, and their sum gives the total number of constraint functions. Another parameter decides the dimension of the decision variables. One more parameter decides the number of disconnected segments in the PF, and a further parameter indicates the distance between the constrained PF and the unconstrained PF. The difficulty level of a DAS-CMOP is controlled by a difficulty triplet, with each of its components ranging from 0 to 1. When any parameter in the difficulty triplet increases, the difficulty level of the DAS-CMOP increases.
It is worth noting that the number of objectives of DAS-CMOPs can be easily scaled by tuning a single parameter, and that the difficulty level of DAS-CMOPs can be easily adjusted by assigning a difficulty triplet with three parameters ranging from 0 to 1.
(7) 
VI A Set of Difficulty Adjustable and Scalable CMOPs
In this section, as an example, a set of nine difficulty adjustable and scalable CMOPs (DAS-CMOP1-9) is suggested through the proposed toolkit.
As mentioned in Section IV, constructing a CMOP consists of constructing objective functions and constraint functions. According to Eq. (7), we suggest nine multi-objective functions, covering convex, concave and discrete PF shapes, to construct CMOPs. A set of difficulty adjustable constraint functions is generated by Eq. (7). Nine difficulty adjustable and scalable CMOPs, named DAS-CMOP1-9, are generated by combining the suggested objective functions with the generated constraint functions. The detailed definitions of DAS-CMOP1-9 are shown in Table II.
In Table II, DAS-CMOP1-3 share the same constraint functions, and so do DAS-CMOP4-6; the difference between the two groups is that they have different distance functions. DAS-CMOP1-6 have two objectives. The number of objectives in Eq. (7) can be scaled beyond two: for example, DAS-CMOP7-9 have three objectives. The constraint functions of DAS-CMOP8 and DAS-CMOP9 are the same as those of DAS-CMOP7.
It is worth noting that the values of the difficulty triplet elements can be set by users. To reduce or increase the difficulty levels of DAS-CMOP1-9, one only needs to adjust the triplet elements to smaller or larger values and generate a new set of test instances.
Problem  Objectives  Constraints
DAS-CMOP1
DAS-CMOP2  The same as those of DAS-CMOP1
DAS-CMOP3  The same as those of DAS-CMOP1
DAS-CMOP4
DAS-CMOP5  The same as those of DAS-CMOP4
DAS-CMOP6  The same as those of DAS-CMOP4
DAS-CMOP7
DAS-CMOP8  The same as those of DAS-CMOP7
DAS-CMOP9  The same as those of DAS-CMOP7
VII Experimental Study
VII-A Experimental Settings
To test the performance of CMOEAs on the DAS-CMOPs, two commonly used CMOEAs (i.e., MOEA/D-CDP and NSGA-II-CDP) are tested on DAS-CMOP1-9 with sixteen different difficulty triplets. As described in Section IV, three parameters defined in a triplet specify the difficulty level of a particular difficulty nature: the first element represents the difficulty level of diversity-hardness, the second denotes the difficulty level of feasibility-hardness, and the third indicates the difficulty level of convergence-hardness. The difficulty triplets for each DAS-CMOP are listed in Table III.
Difficulty Triplets
(0.0,0.0,0.0)  (0.0,0.5,0.0)  (0.0,0.0,0.75)  (1.0,0.0,0.0)
(0.0,0.25,0.0)  (0.0,0.0,0.5)  (0.75,0.0,0.0)  (0.25,0.25,0.25)
(0.0,0.0,0.25)  (0.5,0.0,0.0)  (0.0,1.0,0.0)  (0.5,0.5,0.5)
(0.25,0.0,0.0)  (0.0,0.75,0.0)  (0.0,0.0,1.0)  (0.75,0.75,0.75)
The detailed parameter settings of the algorithms are summarized as follows.

- Reproduction operators: the mutation probability is set to 1/n (n is the number of decision variables), and the distribution index of the polynomial mutation operator is set to 20. For the simulated binary crossover (SBX) operator, the distribution index is set to 20, with a fixed crossover rate.
- Population size: set separately for DAS-CMOP1-6 and for DAS-CMOP7-9.
- Number of runs and stopping condition: each algorithm is run 30 times independently on each test problem with the sixteen different difficulty triplets. The maximum number of function evaluations is 100,000 for DAS-CMOP1-6 and 200,000 for DAS-CMOP7-9.
- Neighborhood size: set separately for DAS-CMOP1-6 and for DAS-CMOP7-9.
- Probability of selecting parents from the neighborhood.
- Maximal number of solutions replaced by a child.
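For concreteness, the polynomial mutation settings above can be read against the standard polynomial mutation operator. The sketch below is that textbook operator, not the paper's own implementation; the bounds and the default probability 1/n are stated for illustration.

```python
import random

def polynomial_mutation(x, lower, upper, pm=None, eta_m=20):
    """Standard polynomial mutation: each gene mutates with probability pm
    (default 1/n) by a perturbation drawn from a polynomial distribution with
    distribution index eta_m; larger eta_m keeps children closer to the parent."""
    n = len(x)
    pm = pm if pm is not None else 1.0 / n
    child = list(x)
    for i in range(n):
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            # Scale by the variable range and clip back into the bounds.
            child[i] = min(max(child[i] + delta * (upper[i] - lower[i]),
                               lower[i]), upper[i])
    return child
```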
VII-B Performance Metric
To measure the performance of MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP1-9 with different difficulty triplets, the inverted generational distance (IGD) [47] is adopted. It is defined as follows.

Inverted Generational Distance (IGD): the IGD metric simultaneously reflects the performance of convergence and diversity, and is defined as

IGD(P*, A) = ( Σ_{v ∈ P*} d(v, A) ) / |P*|,  where d(v, A) = min_{y ∈ A} sqrt( Σ_{i=1}^{m} (v_i − y_i)² )        (8)

where P* is the ideal PF set, A is an approximate PF set achieved by an algorithm, and m is the number of objectives. It is worth noting that a smaller IGD value represents better performance in both diversity and convergence.
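A direct implementation of Eq. (8), using the Euclidean distance in the m-dimensional objective space, can be written as follows.

```python
import math

def igd(ideal_pf, approx_pf):
    """Inverted generational distance: the mean, over points of the ideal PF
    set, of the distance to the nearest point of the approximation set.
    Smaller values indicate better convergence and diversity."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(p, q) for q in approx_pf) for p in ideal_pf) / len(ideal_pf)
```

Because the average runs over the ideal set, an approximation that misses whole PF segments is penalized even if its points converge well, which is why IGD captures diversity-hardness as well as convergence-hardness.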
VII-C Performance Comparisons on Two-objective DAS-CMOPs
Table IV presents the statistical results of the IGD values of MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP1-3. We can observe that for DAS-CMOP1 with the feasibility-hard difficulty triplets, NSGA-II-CDP is significantly better than MOEA/D-CDP, which indicates that NSGA-II-CDP is more suitable for solving DAS-CMOP1 with feasibility-hardness. For DAS-CMOP1 with the convergence-hard difficulty triplets, MOEA/D-CDP is significantly better than NSGA-II-CDP, which indicates that MOEA/D-CDP is more suitable for solving DAS-CMOP1 with convergence-hardness. For DAS-CMOP1 with simultaneous diversity-, feasibility- and convergence-hardness, NSGA-II-CDP is significantly better than MOEA/D-CDP.
The final populations with the best IGD values over 30 independent runs of MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP1 with five of the difficulty triplets are plotted in Fig. 7. We can observe that each type of constraint function in DAS-CMOP1 indeed generates the corresponding difficulty for MOEA/D-CDP and NSGA-II-CDP. As each element of the difficulty triplet increases, the problem becomes more difficult to solve, as illustrated by Fig. 7(d)-(e) and Fig. 7(i)-(j).
For DAS-CMOP2 with the feasibility-hard difficulty triplets, NSGA-II-CDP is significantly better than MOEA/D-CDP. For DAS-CMOP2 with the convergence-hard difficulty triplets, MOEA/D-CDP is significantly better than NSGA-II-CDP, which indicates that MOEA/D-CDP is more suitable for solving DAS-CMOP2 with convergence-hardness; for one further difficulty triplet, MOEA/D-CDP is also significantly better than NSGA-II-CDP. For DAS-CMOP2 with simultaneous diversity-, feasibility- and convergence-hardness, NSGA-II-CDP is significantly better than MOEA/D-CDP.
Fig. 8 shows the final populations with the best IGD values over 30 independent runs of MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP2 with five of the difficulty triplets. Both MOEA/D-CDP and NSGA-II-CDP can only achieve parts of the PFs. As each element of the difficulty triplet increases, it becomes more difficult for MOEA/D-CDP and NSGA-II-CDP to find the whole PF of DAS-CMOP2.
For DAS-CMOP3 with the difficulty triplet (0.0,0.0,0.0), that is, with no constraints in DAS-CMOP3, NSGA-II-CDP is significantly better than MOEA/D-CDP. For DAS-CMOP3 with the larger convergence-hard difficulty triplets, MOEA/D-CDP is significantly better than NSGA-II-CDP, which indicates that MOEA/D-CDP is more suitable for solving DAS-CMOP3 with larger difficulty levels of convergence-hardness; for one further difficulty triplet, MOEA/D-CDP performs better than NSGA-II-CDP. For DAS-CMOP3 with the feasibility-hard difficulty triplets, NSGA-II-CDP is significantly better than MOEA/D-CDP. For DAS-CMOP3 with simultaneous diversity-, feasibility- and convergence-hardness, NSGA-II-CDP is significantly better than MOEA/D-CDP.
Fig. 9 shows the final populations with the best IGD values over 30 independent runs of MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP3 with five of the difficulty triplets. For DAS-CMOP3 with diversity-, feasibility- or convergence-hardness, MOEA/D-CDP and NSGA-II-CDP cannot find the whole PFs. As the difficulty triplets increase, DAS-CMOP3 becomes more difficult for MOEA/D-CDP and NSGA-II-CDP to solve, as illustrated by Fig. 9(d)-(e) and Fig. 9(i)-(j).
The statistical results of the IGD values of MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP4-6 are shown in Table V. From this table, we can observe that MOEA/D-CDP is significantly better than NSGA-II-CDP on DAS-CMOP4 with the difficulty triplet (0.0,0.0,0.0); in other words, MOEA/D-CDP works better than NSGA-II-CDP on DAS-CMOP4 without any constraints. For DAS-CMOP4 with the feasibility-hard difficulty triplets, NSGA-II-CDP is significantly better than MOEA/D-CDP.
For DAS-CMOP4 with the diversity-hard difficulty triplets, MOEA/D-CDP performs significantly better than NSGA-II-CDP. For DAS-CMOP4 with the convergence-hard difficulty triplets, MOEA/D-CDP also performs significantly better than NSGA-II-CDP.
For DAS-CMOP5 with the feasibility-hard difficulty triplets, NSGA-II-CDP performs significantly better than MOEA/D-CDP. For DAS-CMOP5 with diversity- or convergence-hardness, MOEA/D-CDP is significantly better than NSGA-II-CDP.
For DAS-CMOP6 with one difficulty triplet, NSGA-II-CDP is significantly better than MOEA/D-CDP, while for several other difficulty triplets, MOEA/D-CDP is significantly better than NSGA-II-CDP. For DAS-CMOP6 with the rest of the difficulty triplets, there is no significant difference between MOEA/D-CDP and NSGA-II-CDP.
VII-D Performance Comparisons on Three-objective DAS-CMOPs
The statistical results of the IGD values of MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP7-9 are presented in Table VI. For DAS-CMOP7 without any constraints, that is, with the difficulty triplet (0.0,0.0,0.0), NSGA-II-CDP is significantly better than MOEA/D-CDP. For DAS-CMOP7 with diversity- or feasibility-hardness, NSGA-II-CDP is also significantly better than MOEA/D-CDP. For DAS-CMOP7 with two further difficulty triplets, MOEA/D-CDP is significantly better than NSGA-II-CDP. For DAS-CMOP7 with simultaneous diversity-, feasibility- and convergence-hardness, NSGA-II-CDP performs significantly better than MOEA/D-CDP.
For DAS-CMOP8 with convergence-hardness, for example, the difficulty triplets , , and , MOEA/D-CDP performs significantly better than NSGA-II-CDP. For DAS-CMOP8 with the difficulty triplet , MOEA/D-CDP is also significantly better than NSGA-II-CDP. For DAS-CMOP8 with the rest of the difficulty triplets, NSGA-II-CDP is better or significantly better than MOEA/D-CDP.
For DAS-CMOP9 with feasibility-hardness, i.e., the difficulty triplets , and , NSGA-II-CDP performs significantly better than MOEA/D-CDP. For DAS-CMOP9 with convergence- or diversity-hardness, i.e., the difficulty triplets , , , , and , MOEA/D-CDP is significantly better than NSGA-II-CDP. For DAS-CMOP9 with simultaneous diversity-, feasibility-, and convergence-hardness, NSGA-II-CDP performs significantly better than MOEA/D-CDP. The final populations with the best values in 30 independent runs obtained by MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP9 with difficulty triplets , , , and are plotted in Fig. 10. We can observe that both MOEA/D-CDP and NSGA-II-CDP only achieve a few parts of the PF of DAS-CMOP9.
VII-E Analysis of Experimental Results
From the above performance comparisons on the nine DAS-CMOP test instances, it is clear that each type of constraint function generates corresponding difficulties for MOEA/D-CDP and NSGA-II-CDP. As each element in the difficulty triplet increases, the problem becomes more difficult for MOEA/D-CDP and NSGA-II-CDP to solve. Furthermore, it can be concluded that NSGA-II-CDP performs better than MOEA/D-CDP on DAS-CMOPs with feasibility-hardness, while MOEA/D-CDP performs better than NSGA-II-CDP on DAS-CMOPs with diversity- or convergence-hardness. In the case of DAS-CMOPs with simultaneous diversity-, feasibility-, and convergence-hardness, NSGA-II-CDP performs better than MOEA/D-CDP on most of the test instances.
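The significance statements above rest on Wilcoxon's rank-sum test at the 0.05 level over 30 independent runs per algorithm. The following self-contained sketch shows how such a comparison is made; the sample values are synthetic illustrations, the p-value uses the standard normal approximation, and the small tie correction to the variance is ignored.

```python
import math
import random

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum test (normal approximation, no tie
    correction of the variance)."""
    n, m = len(x), len(y)
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * (n + m)
    i = 0
    while i < n + m:
        j = i
        while j + 1 < n + m and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for tied values (1-based)
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    w = sum(ranks[:n])                       # rank sum of the first sample
    mu = n * (n + m + 1) / 2                 # mean of W under H0
    sigma = math.sqrt(n * m * (n + m + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

random.seed(1)
# Synthetic metric samples standing in for 30 runs of each algorithm
# (smaller is better); real use would plug in the recorded values.
a = [random.gauss(0.137, 0.029) for _ in range(30)]
b = [random.gauss(0.149, 0.037) for _ in range(30)]
p = ranksum_pvalue(a, b)
print("significant at 0.05" if p < 0.05 else "not significant at 0.05")
```

With a p-value below 0.05 the difference between the two samples is reported as significant, exactly as in the tables below.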
Instance  DAS-CMOP1  DAS-CMOP2  DAS-CMOP3
Difficulty Triplet  MOEA/D-CDP  NSGA-II-CDP  MOEA/D-CDP  NSGA-II-CDP  MOEA/D-CDP  NSGA-II-CDP
(0.0,0.0,0.0)  1.367E-01 (2.934E-02)  1.494E-01 (3.723E-02)  1.678E-01 (2.675E-02)  1.720E-01 (3.710E-02)  1.909E-01 (4.315E-02)  1.536E-01 (2.340E-02)
(0.0,0.25,0.0)  1.375E-01 (2.591E-02)  9.878E-02 (1.185E-02)  1.203E-01 (2.097E-02)  7.994E-02 (1.316E-02)  1.744E-01 (4.493E-02)  1.224E-01 (2.374E-02)
(0.0,0.0,0.25)  1.351E-01 (3.515E-02)  2.006E-01 (2.100E-02)  1.709E-01 (3.485E-02)  1.606E-01 (4.088E-02)  2.076E-01 (4.827E-02)  1.888E-01 (3.296E-02)
(0.25,0.0,0.0)  1.497E-01 (2.609E-02)  1.524E-01 (4.081E-02)  1.719E-01 (3.073E-02)  1.796E-01 (4.273E-02)  2.300E-01 (5.525E-02)  2.146E-01 (5.225E-02)
(0.0,0.5,0.0)  1.578E-01 (1.516E-02)  1.174E-01 (1.304E-02)  1.359E-01 (2.820E-02)  9.157E-02 (1.670E-02)  1.847E-01 (3.752E-02)  1.292E-01 (2.305E-02)
(0.0,0.0,0.5)  1.809E-01 (3.005E-02)  2.362E-01 (2.125E-02)  1.580E-01 (4.670E-02)  2.164E-01 (4.156E-02)  2.127E-01 (4.019E-02)  2.095E-01 (1.883E-02)
(0.5,0.0,0.0)  1.602E-01 (4.631E-02)  1.605E-01 (3.716E-02)  1.796E-01 (3.874E-02)  1.799E-01 (3.604E-02)  4.185E-01 (1.068E-01)  3.263E-01 (1.168E-01)
(0.0,0.75,0.0)  1.858E-01 (3.527E-02)  1.420E-01 (1.546E-02)  1.548E-01 (2.665E-02)  1.073E-01 (1.557E-02)  2.320E-01 (4.604E-02)  1.527E-01 (2.729E-02)
(0.0,0.0,0.75)  1.769E-01 (3.770E-02)  3.147E-01 (5.324E-02)  1.554E-01 (2.601E-02)  3.354E-01 (1.110E-01)  2.201E-01 (2.596E-02)  2.656E-01 (5.844E-02)
(0.75,0.0,0.0)  1.658E-01 (4.960E-02)  1.510E-01 (4.018E-02)  2.206E-01 (4.483E-02)  1.712E-01 (4.269E-02)  2.258E-01 (5.725E-02)  1.956E-01 (5.785E-02)
(0.0,1.0,0.0)  3.682E-01 (1.230E-02)  3.636E-01 (6.217E-03)  3.250E-01 (7.175E-03)  3.235E-01 (4.915E-03)  4.353E-01 (4.106E-02)  4.389E-01 (2.952E-02)
(0.0,0.0,1.0)  6.998E-01 (3.767E-01)  4.625E-01 (2.755E-02)  7.388E-01 (1.901E-01)  7.177E-01 (1.254E-01)  6.590E-01 (2.879E-04)  6.615E-01 (1.982E-03)
(1.0,0.0,0.0)  4.515E-01 (1.169E-01)  1.531E+00 (9.908E-01)  4.102E-01 (1.029E-01)  1.413E+00 (8.854E-01)  4.331E-01 (9.567E-02)  1.842E+00 (1.431E+00)
(0.25,0.25,0.25)  2.674E-01 (3.893E-02)  2.119E-01 (4.025E-02)  1.404E-01 (2.999E-02)  9.860E-02 (1.632E-02)  2.589E-01 (3.644E-02)  1.686E-01 (4.206E-02)
(0.5,0.5,0.5)  4.160E-01 (7.441E-02)  3.624E-01 (4.702E-02)  1.656E-01 (4.357E-02)  1.123E-01 (1.638E-02)  4.680E-01 (5.516E-02)  4.027E-01 (3.817E-02)
(0.75,0.75,0.75)  7.750E-01 (5.452E-02)  7.079E-01 (9.615E-02)  1.867E-01 (3.758E-02)  1.029E-01 (1.201E-02)  7.473E-01 (1.864E-01)  2.808E-01 (1.242E-01)
Mean and standard deviation of the values obtained by MOEA/D-CDP and NSGA-II-CDP on DAS-CMOP1-3. Wilcoxon's rank sum test at the 0.05 significance level is performed between MOEA/D-CDP and NSGA-II-CDP; the two markers denote that the performance of NSGA-II-CDP is significantly worse than or better than that of MOEA/D-CDP, respectively.
Instance  DAS-CMOP4  DAS-CMOP5  DAS-CMOP6
Difficulty Triplet  MOEA/D-CDP  NSGA-II-CDP  MOEA/D-CDP  NSGA-II-CDP  MOEA/D-CDP  NSGA-II-CDP
(0.0,0.0,0.0)  1.121E-02 (8.906E-03)  5.282E-02 (2.977E-02)  1.464E-02 (1.365E-02)  5.356E-02 (2.640E-02)  6.304E-02 (7.337E-02)  6.778E-02 (3.402E-02)
(0.0,0.25,0.0)  2.913E-03 (1.290E-03)  2.412E-03 (1.460E-04)  3.034E-03 (1.265E-03)  2.269E-03 (7.276E-05)  6.410E-02 (6.666E-02)  5.489E-02 (5.249E-02)
(0.0,0.0,0.25)  7.880E-02 (4.741E-02)  3.380E-01 (5.514E-02)  4.158E-02 (2.298E-02)  4.192E-01 (1.665E-01)  1.202E-01 (5.496E-02)  3.907E-01 (2.475E-01)
(0.25,0.0,0.0)  1.849E-02 (1.324E-02)  6.475E-02 (3.587E-02)  2.010E-02 (1.235E-02)  6.510E-02 (4.145E-02)  9.321E-02 (7.435E-02)  1.291E-01 (6.290E-02)
(0.0,0.5,0.0)  2.915E-03 (1.108E-03)  2.345E-03 (7.926E-05)  2.940E-03 (1.187E-03)  2.390E-03 (6.758E-04)  7.189E-02 (4.962E-02)  5.139E-02 (5.530E-02)
(0.0,0.0,0.5)  2.380E-01 (8.551E-02)  9.456E-01 (3.330E-01)  1.010E-01 (7.600E-02)  8.761E-01 (2.850E-01)  4.169E-01 (3.895E-01)  9.448E-01 (3.083E-01)
(0.5,0.0,0.0)  2.321E-02 (1.611E-02)  5.856E-02 (3.015E-02)  1.822E-02 (9.361E-03)  5.840E-02 (2.992E-02)  9.403E-02 (5.364E-02)  1.369E-01 (5.124E-02)
(0.0,0.75,0.0)  2.668E-03 (7.843E-04)  3.105E-03 (3.840E-03)  2.587E-03 (1.237E-03)  3.078E-03 (4.450E-03)  6.957E-02 (7.664E-02)  6.104E-02 (5.444E-02)
(0.0,0.0,0.75)  4.260E-01 (1.443E-01)  1.266E+00 (1.709E-01)  5.002E-01 (1.142E-01)  1.223E+00 (2.086E-01)  1.260E+00 (3.391E-01)  1.258E+00 (2.116E-01)
(0.75,0.0,0.0)  1.690E-02 (1.301E-02)  5.761E-02 (2.894E-02)  1.992E-02 (1.608E-02)  7.616E-02 (3.299E-02)  1.446E-01 (1.056E-01)  1.268E-01 (6.071E-02)
(0.0,1.0,0.0)  5.229E-02 (1.307E-01)  1.019E-01 (1.272E-01)  1.178E-02 (3.037E-02)  1.114E-01 (1.017E-01)  1.816E-01 (1.024E-01)  1.616E-01 (1.222E-01)
(0.0,0.0,1.0)  1.824E+00 (9.650E-02)  1.633E+00 (2.638E-01)  1.946E+00 (3.058E-01)  1.612E+00 (2.741E-01)  1.983E+00 (1.490E-01)  1.656E+00 (2.554E-01)
(1.0,0.0,0.0)  4.795E-01 (1.055E-01)  7.564E+00 (4.254E+00)  4.465E-01 (9.627E-02)  7.917E+00 (7.613E+00)  5.126E-01 (9.984E-02)  8.314E+00 (6.611E+00)
(0.25,0.25,0.25)  3.392E-03 (1.941E-03)  5.863E-02 (1.292E-01)  2.754E-03 (6.112E-04)  4.428E-02 (1.206E-01)  1.719E-01 (1.534E-01)  1.418E-01 (1.734E-01)
(0.5,0.5,0.5)  1.219E-02 (3.938E-02)  3.658E-01 (1.246E-01)  5.861E-01 (7.668E-01)  2.919E-01 (4.291E-01)  7.863E-01 (5.005E-01)  5.457E-01 (3.147E-01)
(0.75,0.75,0.75)  2.341E-01 (1.670E-03)  2.449E-01 (5.065E-02)  1.319E+00 (4.447E-01)  4.312E-01 (5.137E-01)  1.014E+00 (3.383E-01)  9.106E-01 (4.101E-01)
Instance  DAS-CMOP7  DAS-CMOP8  DAS-CMOP9
Difficulty Triplet  MOEA/D-CDP  NSGA-II-CDP  MOEA/D-CDP  NSGA-II-CDP  MOEA/D-CDP  NSGA-II-CDP
(0.0,0.0,0.0)  6.052E-02 (1.018E-03)  5.150E-02 (4.413E-03)  6.679E-02 (5.616E-04)  6.679E-02 (5.958E-03)  2.737E-01 (2.447E-01)  3.158E-01 (1.784E-01)
(0.0,0.25,0.0)  5.154E-02 (1.787E-03)  4.946E-02 (2.179E-03)  6.759E-02 (2.889E-03)  6.453E-02 (1.798E-03)  3.499E-01 (1.447E-01)  2.084E-01 (1.231E-01)
(0.0,0.0,0.25)  5.048E-02 (9.856E-04)  5.288E-02 (5.451E-03)  6.399E-02 (1.117E-03)  7.098E-02 (3.921E-03)  9.470E-02 (6.021E-02)  3.315E-01 (1.805E-01)
(0.25,0.0,0.0)  5.852E-02 (7.656E-04)  4.869E-02 (4.057E-03)  6.756E-02 (1.122E-03)  6.326E-02 (3.801E-03)  3.127E-01 (2.813E-01)  3.458E-01 (1.977E-01)
(0.0,0.5,0.0)  5.178E-02 (3.099E-03)  4.891E-02 (1.839E-03)  6.718E-02 (2.691E-03)  6.455E-02 (2.882E-03)  3.552E-01 (1.265E-01)  2.347E-01 (1.042E-01)
(0.0,0.0,0.5)  4.583E-02 (2.608E-04)  6.307E-02 (1.707E-02)  6.502E-02 (1.267E-03)  7.726E-02 (7.831E-03)  8.891E-02 (2.556E-02)  3.847E-01 (1.532E-01)
(0.5,0.0,0.0)  5.546E-02 (1.450E-03)  4.548E-02 (4.871E-03)  6.663E-02 (1.498E-03)  5.746E-02 (4.938E-03)  1.875E-01  4.130E-01
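For reference, the 16 difficulty triplets exercised in each table can be enumerated programmatically. This sketch merely reproduces the combinations listed in the tables, not necessarily in table order; which triplet position corresponds to diversity-, feasibility-, or convergence-hardness follows the toolkit's definition and is not assumed here.

```python
# The 16 difficulty-triplet settings appearing in each table: the
# unconstrained triplet, each hardness level on a single axis, and
# three simultaneous-hardness triplets.
levels = [0.25, 0.5, 0.75, 1.0]
triplets = [(0.0, 0.0, 0.0)]
for lv in levels:
    for axis in range(3):
        t = [0.0, 0.0, 0.0]
        t[axis] = lv
        triplets.append(tuple(t))
triplets += [(v, v, v) for v in (0.25, 0.5, 0.75)]
print(len(triplets))  # prints 16, the number of rows per table
```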