Multiobjective Test Problems with Degenerate Pareto Fronts

06/07/2018, by Liangli Zhen et al.

In multiobjective optimization, a set of scalable test problems with a variety of features allows researchers to investigate and evaluate the abilities of different optimization algorithms, and thus can help them to design and develop more effective and efficient approaches. Existing, commonly used test problem suites mainly focus on situations where all the objectives conflict with each other. However, in some many-objective optimization problems, there may be unexpected characteristics among objectives, e.g., redundancy. This leads to degenerate problems. In this paper, we systematically study degenerate problems. We abstract three generic characteristics of degenerate problems, and on the basis of these characteristics we present a set of test problems to support the investigation of multiobjective search algorithms on problems with redundant objectives. To assess the proposed test problems, ten representative multiobjective evolutionary algorithms are tested. The results indicate that none of the tested algorithms is able to effectively solve the proposed problems, calling for new approaches to addressing degenerate multiobjective problems.


I Introduction

In multiobjective optimization, researchers generally assume that the objectives conflict with each other. In practice, however, this may not always be true. For example, when dealing with a dynamic optimization scenario, an engineer may not look carefully into the connection between the existing objectives, but rather add new objectives to accommodate new requirements. This may result in some added objectives being harmonious with the existing objectives (or their combinations). Such problems are called degenerate problems [1]. Degenerate problems widely exist in the real world, such as in multi-speed gearbox design [2], storm-drainage system planning [3], car structure design [4, 5], and optimal product selection in software engineering [6].

Degenerate problems appear relatively rarely in evolutionary multiobjective optimization research. More importantly, existing degenerate test problems are designed to serve particular purposes or in accordance with specific landscape patterns, and thus fail to represent the variety of real-world scenarios. For example, Deb-Thiele-Laumanns-Zitzler (DTLZ) 5 [7], DTLZ6 [7], and Walking Fish Group (WFG) 3 [8, 9] are three degenerate test problems whose PFs lie on 1D curves independent of the number of objectives, and all of the objectives except the last one are multiples of the first objective on their PFs. The DTLZ5 problem is further extended into DTLZ5(I, M) [10], an M-objective problem with a specifiable number I of essential objectives. To help researchers easily view and understand the search behavior of multiobjective optimizers, the multi-point distance minimization problem (MP-DMP) [11, 12, 13] and the multi-line distance minimization problem (ML-DMP) [14, 15] were developed. MP-DMP simultaneously minimizes the distances of a point to a prespecified set of target points, and ML-DMP simultaneously minimizes the distances of a point to a set of target lines. Since the Pareto-optimal regions of these two problems in the decision space typically lie on a two-dimensional manifold (regardless of the number of objectives and decision variables) [15], their PFs are also on a two-dimensional manifold, which makes them degenerate problems. In [16], a set of problems whose PFs lie on low-dimensional manifolds is presented to stress the complexity of the Pareto optimal solutions in the decision space. In contrast, to emphasize the effectiveness of the Pareto optimal solutions in the objective space, another set of problems is proposed in [17], where the redundant objectives are all equal to zero and the degenerate PF is determined only by the first part of the objectives.

On the other hand, objective reduction techniques have been receiving increasing attention in the evolutionary multiobjective optimization area [18, 19, 20, 21, 22, 13, 23]. However, the lack of a set of comprehensive degenerate problems may limit systematic investigation of their performance. Algorithms which perform well on existing degenerate test problems with particular properties (e.g., DTLZ5(I, M)) may not be able to work in real-life degenerate cases, where the correlation between objectives can be of high complexity.

In this paper, we propose a set of degenerate test problems aiming to reflect the generality of degenerate problems. The main contribution of this paper is twofold: 1) by analyzing the relations among the objectives of a problem, we capture three characteristics of degenerate problems; 2) based on a uniform formulation and the captured characteristics, five test problems are proposed. These problems contain a variety of representative characteristics and features, which enable researchers to investigate the working mechanisms of different MOEAs on degenerate problems, particularly the objective reduction-based algorithms.

The remainder of this paper is organized as follows. Section II and Section III are devoted to the design principles and the description of three characteristics of test problems with degenerate PFs, respectively. Based on the principles and the analysis in these two sections, five test problems are proposed in Section IV. Section V presents empirical results of 10 state-of-the-art MOEAs on the proposed test problems and also the impact of the presented three characteristics on existing objective-reduction techniques. Finally, Section VI concludes the paper.

II Design Principles

To make the test problems easy to extend and generalize, we follow four basic design principles, as suggested in [7], [9] and [24].

  • The test problems can be constructed from a uniform formulation.

  • The test problems should be scalable in the number of decision variables.

  • The test problems should be scalable in the number of objectives.

  • The resulting PF of each problem should be exactly known, and the corresponding decision variable values should also be easy to find.

In this paper, we use the following uniform formulation for all the proposed test problems:

$f_i(\mathbf{x}) = t_i\big(\tilde{f}_1(\mathbf{x}), \ldots, \tilde{f}_K(\mathbf{x})\big), \quad i = 1, \ldots, M,$ (1)

with

$\tilde{f}_j(\mathbf{x}) = \big(1 + g_j(\mathbf{x}_{II})\big)\, h_j(\mathbf{x}_{I}), \quad j = 1, \ldots, K,$ (2)

where $\mathbf{x}_{I}$ is the first part of the decision vector and $\mathbf{x}_{II}$ is the other part; the functions $h_j$ are used to define the shape of the Pareto front, known as the shape functions; the functions $g_j$ define the fitness landscape, known as the landscape functions [24]; $\tilde{f}_1, \ldots, \tilde{f}_K$ are the essential objective functions; $f_1, \ldots, f_M$ are the problem objectives; and $t_1, \ldots, t_M$ are transforming functions which define the relation between the problem objectives and the essential objectives.

Denoting $F = (f_1, \ldots, f_M)^T$, $\tilde{F} = (\tilde{f}_1, \ldots, \tilde{f}_K)^T$, $T = (t_1, \ldots, t_M)^T$, $h = (h_1, \ldots, h_K)^T$, and $g = (g_1, \ldots, g_K)^T$, we can rewrite the formulation as

$F(\mathbf{x}) = T\big(\tilde{F}(\mathbf{x})\big)$ (3)

with

$\tilde{F}(\mathbf{x}) = \big(\mathbf{1}_K + g(\mathbf{x}_{II})\big) \circ h(\mathbf{x}_{I}),$ (4)

where the symbol $\circ$ denotes an entry-wise product operation and $\mathbf{1}_K$ is a $K$-dimensional vector of ones. Please note that (4) defines the $K$ essential objective functions, and (3) transforms these essential objectives into another, higher-dimensional space to obtain the final $M$ objective functions of the test problems.

For a given test problem formulated by (4) and (3), the goal of a multiobjective optimizer is to find the Pareto-optimal decision vectors, i.e., vectors $\mathbf{x}$ for which $g(\mathbf{x}_{II}) = \mathbf{0}$ and whose shape values $h(\mathbf{x}_{I})$ are spread over the whole PF. In this manner, we can use the designed $g$ to evaluate the ability of an algorithm to converge to the PF, and use the designed $h$ to test an algorithm's ability to obtain diverse solutions.

Since this paper aims to present some important characteristics of problems with degenerate PFs, we focus on the design of the transformation $T$ in (3), which controls the relationship between the final objectives $F$ and the essential objectives $\tilde{F}$. For the definition of the essential objectives in (4), we simply select/design $h$ and $g$ based on the existing definitions in [7] and [9]. In the next section, we will present the detailed characteristics of the proposed test problems with respect to the design of $T$ in (3).
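To make the formulation above concrete, the following Python sketch instantiates (3) and (4) for a toy setting; the particular shape functions, landscape function, and linear transformation used here are illustrative assumptions, not the definitions of the DPF suite.

```python
import numpy as np

def essential_objectives(x):
    """A toy instance of (4): f_tilde = (1 + g(x_II)) * h(x_I).

    Here x_I = x[0] and x_II = x[1:]; the shape functions h (a linear PF)
    and the landscape function g (a sphere) are illustrative choices.
    """
    x_I, x_II = x[0], x[1:]
    g = np.sum((x_II - 0.5) ** 2)          # landscape: zero exactly on the PF
    h = np.array([x_I, 1.0 - x_I])         # shape: a linear 1-D Pareto front
    return (1.0 + g) * h                   # entry-wise product, K = 2

def problem_objectives(x, A):
    """A toy instance of (3): F = T(f_tilde), with T chosen linear here.

    A is an (M x K) matrix with non-negative entries, so every problem
    objective is a non-decreasing function of the essential objectives.
    """
    return A @ essential_objectives(x)

# Lift K = 2 essential objectives to M = 4 problem objectives.
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(4, 2))
x = np.array([0.3, 0.5, 0.5])              # x_II = (0.5, 0.5) minimizes g
print(problem_objectives(x, A))            # a point mapped from the PF
```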

III Problem Characteristics

As mentioned before, the correlation between objectives is not systematically considered in existing test problems. In this section, we present the following three characteristics of degenerate problems.

III-A Explicitly Redundant Objectives

In many-objective optimization, there exist problems that explicitly contain redundant objectives, i.e., the presence of these objectives does not have any impact on the solutions of the optimization problem. With regard to this case, we have the following theorem (for ease of explanation, assume that there are two essential objectives):

Theorem 1.

Suppose that there is an optimization problem with two conflicting objectives $f_1$ and $f_2$, and we add another objective $f_3 = t(f_1, f_2)$, where $t$ is a non-decreasing function with respect to $f_1$ and $f_2$. Then the new problem with the three objectives $f_1$, $f_2$, and $f_3$ has the same Pareto solution set as the original two-objective problem.

Proof:

See Appendix A. ∎

Based on Theorem 1, we can define test problems with an arbitrary number of redundant objectives. If an algorithm can find the essential objectives of a problem, it can ignore all redundant objectives and potentially obtain a good set of solutions for the original problem. Here, we give a general formulation of this type of problem. Let $\tilde{f}_1, \ldots, \tilde{f}_K$ be $K$ conflicting objectives; we then add $M - K$ redundant objectives to the problem as:

$f_i = \tilde{f}_i, \; i = 1, \ldots, K; \qquad f_{K+j} = t_j\big(\tilde{f}_1, \ldots, \tilde{f}_K\big), \; j = 1, \ldots, M - K,$ (5)

where $t_1, \ldots, t_{M-K}$ are non-decreasing functions with respect to their corresponding inputs. From the definition of DTLZ5(I, M) in [10], we can see that it is a special case of this type of test problem.
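As a concrete, hedged illustration of this construction and of Theorem 1, the sketch below augments two illustrative conflicting objectives with several non-decreasing combinations and verifies by brute force, on a sampled set of solutions, that the non-dominated subset is unchanged; the specific objectives and combinations are assumptions for the example only.

```python
import numpy as np

def nondominated_mask(F):
    """Boolean mask of non-dominated rows of the objective matrix F
    (minimization; dominance = no worse in all objectives, better in one)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        mask[i] = not dominated
    return mask

# Two conflicting essential objectives on x in [0, 1] (illustrative choice).
x = np.linspace(0.0, 1.0, 201)
essential = np.column_stack([x, (1.0 - x) ** 2])

# Redundant objectives as in (5): non-decreasing functions of the essentials.
f1, f2 = essential[:, 0], essential[:, 1]
redundant = np.column_stack([f1 + 2.0 * f2,
                             np.maximum(f1, f2),
                             0.5 * f1 + f2 ** 2])
augmented = np.column_stack([essential, redundant])

# Theorem 1 implies that the non-dominated subsets coincide.
print(np.array_equal(nondominated_mask(essential), nondominated_mask(augmented)))
```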

III-B Implicitly Redundant Objectives

Fig. 1: The reference points sampled from the PF of the 3-objective problem with two essential objectives defined in (6). (a) The PF in the 3D objective space. (b)-(d) Pairwise projections of the PF onto the three two-objective subspaces.

In real-world applications, the number of essential objectives may be smaller than the number of objectives of the underlying problem, while the essential objective set is not a subset of the objective set of the problem. For this case, let us first consider a 3-objective minimization problem:

(6)

with

(7)

where $n$ is the number of decision variables. Fig. 1(a) shows the PF of this problem, from which we can see that the PF lies on a one-dimensional manifold since the problem has only two essential objectives. Generally, this problem cannot be well solved with objective selection-based reduction methods. The pairwise projections of the original objectives onto the two-objective subspaces are plotted in Fig. 1(b)-(d), respectively. From the plots, we can see that the Pareto optimal solution set (PS) of the original 3-objective problem and the PSs of the reduced bi-objective MOPs only partially overlap. Nevertheless, the PS of this problem can be obtained by optimizing the problem with the two essential objectives alone.

Following the manner of the above example, we assume that the essential (conflicting) objectives are $\tilde{f}_1, \ldots, \tilde{f}_K$, and then we construct a minimization problem with $M$ objectives:

(8)

where $t_1, \ldots, t_M$ are increasing functions of their inputs. With regard to this case, we have the following theorem.

Theorem 2.

Suppose there exists an optimization problem with the conflicting objectives $\tilde{f}_1, \ldots, \tilde{f}_K$, and we construct another problem via (8). Then the new problem with the $M$ objectives $f_1, \ldots, f_M$ has the same PS as the problem with the objectives $\tilde{f}_1, \ldots, \tilde{f}_K$.

Proof:

See Appendix B. ∎

Since the essential objectives are not all included in the objective set of the problem, this type of problem cannot be well solved with objective selection-based MOEAs. This type of problem is proposed to test the ability of algorithms to extract essential objectives.
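The following sketch, an illustrative two-variable instance rather than the problem defined in (6), shows why objective selection struggles here: each problem objective is a non-decreasing mixture of two essential objectives, so no subset of the problem objectives coincides with the essential pair. It brute-forces the non-dominated samples obtained from each objective subset and compares them with those of the essential objectives; all functions and coefficients are assumptions for the example only.

```python
import numpy as np

def nondominated_idx(F):
    """Indices of non-dominated rows of F (minimization)."""
    keep = set()
    for i in range(F.shape[0]):
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        if not dominated:
            keep.add(i)
    return keep

# Sample candidate solutions x = (x1, x2) in [0, 1]^2.
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 2))

# Essential objectives (illustrative): conflict via x1, convergence via x2.
g = (X[:, 1] - 0.5) ** 2
ft = np.column_stack([X[:, 0] + g, 1.0 - X[:, 0] + g])

# Problem objectives: non-decreasing mixtures of the essential objectives,
# so neither essential objective appears verbatim in the problem.
f = np.column_stack([ft[:, 0] + 0.5 * ft[:, 1],
                     0.5 * ft[:, 0] + ft[:, 1],
                     ft[:, 0] * ft[:, 1]])

essential_ps = nondominated_idx(ft)
for cols in [[0, 1], [0, 2], [1, 2], [0, 1, 2]]:
    subset_ps = nondominated_idx(f[:, cols])
    print(f"objectives {cols}: |PS| = {len(subset_ps)}, "
          f"overlap with essential PS = {len(subset_ps & essential_ps)}")
```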

III-C Partially Redundant Objectives

The correlation between objectives may differ in different regions of the objective space. Two objectives may be harmonious on some parts of the PF, while they are unrelated or conflicting on other parts of the PF. Let us consider a 3-objective minimization problem:

(9)

with

(10)

where $n$ is the number of decision variables. Fig. 2 shows the PF of this problem, from which we can see that two of the objectives are conflicting in one region of the PF while they are equal in another region, which gives the problem a degenerate PF in that region, i.e., it leads to a partially degenerate test problem.

Fig. 2: The reference points sampled from the PF of the 3-objective problem with a partially degenerate PF defined in (9). (a) The PF in the 3D objective space. (b) Projection of the PF onto one of the two-objective subspaces.

This type of test problem is proposed to test the ability of algorithms to discover degenerate segments of the PF in the objective space.

IV Problem Instances

Based on the basic principles and the above three characteristics, we present here a representative set of test problems with degenerate PFs, called DPF (the MATLAB code of the proposed test problems is available at http://machineilab.org/users/zhenliangli/code/dpf.zip). The essential objectives are carefully selected/designed with diverse properties which cover a good representation of various real-world scenarios, such as being multimodal, disconnected, partially separable, and biased, and having different shapes of PFs. The characteristics and features of these five test problems are summarized in Tab. I. More test problems can also be constructed by designing different essential objective functions and transforming functions.

Problem Redundant Correlation PF shape Other features
DPF1 Explicitly Linear Linear Multimodal
DPF2 Explicitly Nonlinear Mixed Disconnected,
DPF3 Implicitly Linear Concave Bias
DPF4 Implicitly Nonlinear Concave Multimodal
DPF5 Partially Linear Convex Partially separable
TABLE I: Characteristics and features of the proposed test problems. The correlation denotes the relation between the problem objectives and the essential objectives.

IV-A Test Problem DPF1

In the first test problem, we construct an $M$-objective minimization problem with $K$ essential objectives. The explicitly redundant objectives are linearly correlated with the essential objectives. The objective functions of DPF1 are defined as

(11)

with

(12)

where $n$ is the number of decision variables, $k$ (set to a recommended default) denotes the number of elements in $\mathbf{x}_{II}$, and the coefficient vectors in (11) are column vectors with non-negative elements.

The essential objectives are simply those of DTLZ1 [7]. The Pareto optimal solutions correspond to decision vectors whose landscape functions equal zero, and the corresponding essential objective vectors lie on the linear hyper-plane $\sum_{j=1}^{K} \tilde{f}_j = 0.5$. The difficulty of this problem is to select the essential objectives and converge to the hyper-plane.
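To make this concrete, here is a hedged sketch of a DPF1-like instance: the essential objectives are exactly those of DTLZ1, while the redundant objectives are assumed to be non-negative linear combinations of them (the precise form of (11) and the recommended parameter values are not reproduced here).

```python
import numpy as np

def dtlz1_essential(x, K):
    """DTLZ1 objectives [7], used here as the K essential objectives."""
    x_I, x_II = x[:K - 1], x[K - 1:]
    g = 100.0 * (x_II.size + np.sum((x_II - 0.5) ** 2
                                    - np.cos(20.0 * np.pi * (x_II - 0.5))))
    f = np.empty(K)
    for j in range(K):
        f[j] = 0.5 * (1.0 + g) * np.prod(x_I[:K - 1 - j])
        if j > 0:
            f[j] *= 1.0 - x_I[K - 1 - j]
    return f

def dpf1_like(x, K, A):
    """A DPF1-like instance: K essential objectives plus explicitly
    redundant objectives given by non-negative linear combinations of them.
    A is an (M - K) x K non-negative matrix (an assumed form)."""
    ft = dtlz1_essential(x, K)
    return np.concatenate([ft, A @ ft])

# M = 5 objectives, K = 3 essential, 7 decision variables.
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(2, 3))
x = np.concatenate([rng.uniform(size=2), np.full(5, 0.5)])  # x_II = 0.5 => g = 0
print(dpf1_like(x, 3, A))        # first 3 entries sum to 0.5 on the PF
```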

IV-B Test Problem DPF2

The second test problem is also an $M$-objective minimization problem with $K$ essential conflicting objectives, but its explicitly redundant objectives are non-linearly correlated with the essential objectives. The problem is to minimize the following objective functions:

(13)

with

(14)

where $n$ is the number of decision variables, $k$ (set to a recommended default) denotes the number of elements in $\mathbf{x}_{II}$, the coefficient vectors in (13) are column vectors with non-negative elements, and the transforming functions are nonlinear and non-decreasing mapping functions.

IV-C Test Problem DPF3

The third test problem is an $M$-objective minimization problem with $K$ essential objectives. The essential objectives do not appear explicitly in the problem; they exist implicitly, as follows:

(15)

with

(16)

where $n$ is the number of decision variables and $k$ (set to a recommended default) denotes the number of elements in $\mathbf{x}_{II}$. Please note that the obtained objective functions are partially linear in the essential objective functions of DPF3. From the definition of the essential objective functions, we can see that the search space has a variable density of solutions due to the bias transformation on the decision variables; a hedged illustration of such a bias transformation is given below. The essential objectives are locally linearly correlated with the last objectives of the defined problem.
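The sketch below illustrates how a bias transformation of the kind mentioned above, assumed here to be a WFG-style polynomial bias [9] rather than the exact transformation of DPF3, produces a highly non-uniform density of solutions from uniformly sampled decision variables.

```python
import numpy as np

def poly_bias(y, gamma=0.02):
    """WFG-style polynomial bias (assumed form): for 0 <= y <= 1 and a small
    exponent, values are pushed towards 1, skewing the solution density."""
    return np.power(y, gamma)

rng = np.random.default_rng(3)
y = rng.uniform(size=100_000)          # uniformly sampled decision values
z = poly_bias(y)                       # biased values
# Most of the uniformly sampled points are mapped above 0.9:
print(np.mean(z > 0.9), np.mean(y > 0.9))
```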

IV-D Test Problem DPF4

In the fourth test problem, we construct an $M$-objective minimization problem with $K$ essential objectives. Different from DPF3, the objectives of this problem are nonlinearly correlated with the essential objectives, as

(17)

with

(18)

where $n$ is the number of decision variables, $k$ (set to a recommended default) denotes the number of elements in $\mathbf{x}_{II}$, and the transforming functions are nonlinear and non-decreasing mapping functions. This problem is multimodal, as its landscape function has many local minima. Extracting the essential objectives of this problem is more difficult than for DPF3 because of the nonlinear mapping between the problem objective set and the essential objective set.

Fig. 3: The solution sets obtained by NSGA-II after a fixed number of generations on DPF2 with the quadratic and the sigmoid transforming functions in (a) and (b), respectively, where the red points are sampled from the PF and the blue dots denote the obtained solutions.

IV-E Test Problem DPF5

In the fifth test problem, we construct a minimization problem whose objective correlations differ across different segments of the PF:

(19)

with

(20)

where $n$ is the number of decision variables and $k$ (set to a recommended default) denotes the number of elements in $\mathbf{x}_{II}$. From the definition of DPF5, we can see that the objective vectors on the PF satisfy a specific relation, and that the decision variables are partially separable. The PF of this problem contains a degenerate Pareto-optimal segment and a non-degenerate Pareto-optimal segment of higher dimension.

IV-F Setting of Parameters and Mapping Functions in DPF

(a) DPF1
(b) DPF2
(c) DPF3
(d) DPF4
(e) DPF5
Fig. 4: The reference points sampled from the PFs of the test problems DPF1 to DPF5.

We assign randomly generated numbers to the elements of the coefficient vectors in DPF1-DPF2 and to the parameters in DPF3-DPF4. Although the test instances are easy to implement, the randomly generated parameters could differ across runs, which would make comparisons of the statistical results difficult. To guarantee that these parameters are unchanged across independent runs, we use a chaos-based pseudo-random number generator following [24]. The generated numbers are

$x_{k+1} = r \, x_k (1 - x_k),$ (21)

where $r$ and the initial value $x_0$ are the parameters of the logistic map in (21), and they are fixed to the same typical values in every run. The generated numbers are sequentially assigned to the elements of the coefficient vectors in DPF1-DPF2, and the parameters in DPF3-DPF4 are set to the generated numbers sorted in increasing order.
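A minimal sketch of such a chaos-based generator, assuming the standard logistic map of (21); the parameter and seed values below are illustrative, not necessarily those used for DPF.

```python
def logistic_map_numbers(n, r=3.7, x0=0.23):
    """Generate n deterministic numbers in (0, 1) via the logistic map
    x_{k+1} = r * x_k * (1 - x_k); because r and x0 are fixed, the same
    sequence is reproduced in every independent run."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

# Assign the numbers to the coefficient vectors (DPF1/DPF2) or, sorted in
# increasing order, to the parameters of DPF3/DPF4, as described above.
coefficients = logistic_map_numbers(6)
sorted_parameters = sorted(logistic_map_numbers(4))
print(coefficients, sorted_parameters)
```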

To preserve the dominance relation between decision vectors, the nonlinear mapping functions in DPF2 and DPF4 have to be non-decreasing functions and increasing functions, respectively. The choice of the mapping functions is critical for the test problems. Many widely used increasing functions can be selected to construct instances of the proposed test problems. However, it is notable that different mapping functions induce different levels of difficulty in the test problems.

Fig. 3 shows two solution sets obtained by NSGA-II on DPF2 with the mappings of the quadratic function

(22)

and the sigmoid function

(23)

From Fig. 3(a), we can see that NSGA-II obtains a solution set with good convergence to and diversity over the PF of the problem. In contrast, Fig. 3(b) shows that most of the solutions are far from the PF. This illustrates that DPF2 with the sigmoid function is more difficult to optimize than DPF2 with the quadratic function. A potential reason is that the input values of the sigmoid function are mapped to values that are approximately equal to one, which decreases the distinction between two different inputs. In this paper, we adopt the quadratic function as the nonlinear mapping function for DPF2 and DPF4.
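The saturation effect just described can be seen in the following sketch, which assumes a plain square map and the logistic sigmoid as stand-ins for the quadratic function in (22) and the sigmoid function in (23); the exact functions used in DPF2 and DPF4 may be parameterized differently.

```python
import numpy as np

quadratic = lambda y: y ** 2                  # non-decreasing for y >= 0
sigmoid = lambda y: 1.0 / (1.0 + np.exp(-y))  # increasing but saturating

# Two distinct essential objective values that an optimizer must tell apart.
y1, y2 = 4.0, 6.0
print(quadratic(y2) - quadratic(y1))  # gap amplified: 20.0
print(sigmoid(y2) - sigmoid(y1))      # gap almost vanishes: about 0.016
```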

Fig. 4 shows the scatter plots of the PFs of DPF1 to DPF5. From the plots, we can see that the PFs of DPF1-DPF4 lie on low-dimensional manifolds, and that the PF of DPF5 contains a curve and a part of a spherical surface. Furthermore, this test suite has a variety of features, e.g., Pareto optimal geometry, modality, and PF shape, and follows a set of recommendations, e.g., a scalable number of objectives and variables, known Pareto optima, and dissimilar trade-off ranges.

V Computational Evaluations

This section is devoted to the experimental investigation of the proposed test problems, with a focus on their difference from existing degenerate problems. To do so, we first examine the performance of several state-of-the-art MOEAs, most of which have been found to be promising on existing degenerate problems. Then, we look into the impact of the proposed three characteristics and compare the proposed problems with a widely used degenerate problem by demonstrating the performance difference of five representative objective reduction methods on them.

V-A Tested MOEAs

In the experiments, ten MOEAs are tested on the proposed test problems, including classical MOEAs, algorithms designed specially for MaOPs, and MOEAs based on objective reduction. These ten MOEAs are the nondominated sorting genetic algorithm II (NSGA-II) [25], the multiobjective evolutionary algorithm based on decomposition (MOEA/D) [26], the indicator-based evolutionary algorithm (IBEA) [27], the reference vector guided evolutionary algorithm (RVEA) [28], the strength Pareto evolutionary algorithm 2 [29] with the shift-based density estimation (SDE) strategy [30] (SPEA2+SDE), the algorithm for the minimum objective subset problem (δ-MOSS) [31], the algorithm for finding a minimum objective subset of size k with minimum error (k-EMOSS) [31], the objective space partitioning based evolutionary algorithm (OSP) [21], the objective reduction algorithm based on nonlinear correlation information entropy (NCIE) [22], and the objective reduction algorithm with multiobjective search (ORMOS) [23]. Please note that the last five methods are objective reduction methods; the first four of them are incorporated into NSGA-II, and ORMOS is incorporated into SPEA2+SDE, to obtain the PS of MaOPs in our experiments.

V-B Parameter Settings for Tested MOEAs

A simulated binary crossover (SBX) and a polynomial mutation with the probability 1/n (where n denotes the number of decision variables) are used in all MOEAs (a basic sketch of these operators is given below), and their distribution indexes are both set as recommended in [32]. The other parameters of the MOEAs follow the suggestions in their original papers. MOEA/D has two commonly used achievement scalarizing functions, Tchebycheff and penalty-based boundary intersection (PBI); in this study, we use the latter, with the neighborhood size and the penalty parameter set to commonly used values. For IBEA and RVEA, the respective parameters are set to their recommended values. The parameter of δ-MOSS is set to its recommended value, and the parameter k of k-EMOSS is set to match the test instance. For OSP, the number of subsets of the partition is set to a fixed value, and a fixed threshold value is used for ORMOS. Since δ-MOSS, k-EMOSS, NCIE, and OSP are incorporated into NSGA-II, and ORMOS is incorporated into SPEA2+SDE, the objective reduction is executed periodically (every fixed number of generations) within NSGA-II and SPEA2+SDE.
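For readers unfamiliar with these operators, here is a minimal sketch of SBX and polynomial mutation in their basic textbook form; the distribution indexes, probabilities, and bound handling below are illustrative assumptions and not the exact experimental settings.

```python
import numpy as np

rng = np.random.default_rng(4)

def sbx_crossover(p1, p2, eta_c=20.0):
    """Simulated binary crossover (basic form, no variable-bound handling).
    eta_c is the distribution index; the value here is an assumption."""
    u = rng.uniform(size=p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta_c + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, eta_m=20.0, lower=0.0, upper=1.0, prob=None):
    """Polynomial mutation with per-variable probability prob (default 1/n)."""
    n = x.size
    prob = 1.0 / n if prob is None else prob
    y = x.copy()
    for i in range(n):
        if rng.uniform() < prob:
            u = rng.uniform()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            y[i] = np.clip(y[i] + delta * (upper - lower), lower, upper)
    return y

# Example: produce one offspring from two random parents.
p1, p2 = rng.uniform(size=10), rng.uniform(size=10)
child = polynomial_mutation(sbx_crossover(p1, p2)[0])
print(child)
```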

V-C Performance Metrics

To compare the performance of the MOEAs on the proposed test problems, the inverted generational distance (IGD) [33, 34] is adopted in the experiments.

IGD measures the average distance from the points on the PF to their closest solution in the obtained solution set. It provides combined information about the convergence and diversity of a solution set [34]. Mathematically, let $P^*$ be a reference set representing the PF, and $S$ be a set of solutions obtained by an MOEA. The IGD value between $P^*$ and $S$ is defined as

$\mathrm{IGD}(P^*, S) = \frac{1}{|P^*|} \sum_{v \in P^*} d(v, S),$ (24)

where $d(v, S)$ denotes the minimal Euclidean distance from $v$ to the elements in $S$. A small IGD value indicates that the obtained solution set is close to the PF and also has a good distribution. To calculate IGD, we have to provide a reference set representing the PF. In our experiments, the reference set is constructed by uniformly sampling points from the true PFs.
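A minimal implementation of the IGD metric as defined in (24), assuming Euclidean distance and a pre-sampled reference set; the reference-set size used in the experiments is not reproduced here.

```python
import numpy as np

def igd(reference, solutions):
    """Inverted generational distance (24): the average, over all reference
    points v on the PF, of the minimal Euclidean distance from v to the
    obtained solution set."""
    ref = np.asarray(reference, dtype=float)
    sol = np.asarray(solutions, dtype=float)
    # pairwise distances: |reference| x |solutions|
    d = np.linalg.norm(ref[:, None, :] - sol[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy usage: reference points sampled from a linear PF f1 + f2 = 1.
t = np.linspace(0.0, 1.0, 101)
reference = np.column_stack([t, 1.0 - t])
solutions = np.column_stack([t[::10], 1.05 - t[::10]])   # slightly off the PF
print(igd(reference, solutions))
```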

V-D Results of MOEAs

In the proposed test problems, three parameters should be provided, i.e., the number of objectives, the number of essential objectives, and the number of decision variables. In the first experiment, we test instances with 3, 6, and 10 objectives (the corresponding settings are listed in Tab. II). The number of decision variables is set to the recommended value stated for the corresponding problems. The maximum number of generations is taken as the termination condition and is set separately for the instances with 3, 6, and 10 objectives. For MOEA/D and RVEA, the population size is determined by the simplex-lattice design factor and the number of objectives; we follow the setting in [28], where the population size is specified for the problems with 3, 6, and 10 objectives, respectively. In this experiment, repeated independent Monte Carlo runs are conducted on each instance for each algorithm, and the statistical results of the MOEAs are reported in Tab. II.

For the 3-objective test instances, the Pareto-based algorithms NSGA-II and SPEA2+SDE achieve competitive performance. The objective reduction-based algorithms δ-MOSS, k-EMOSS, and NCIE improve the performance of NSGA-II slightly on DPF1 and DPF2, while they are inferior to NSGA-II on DPF3 and DPF4. This is because these objective selection-based algorithms use a subset of the original objective set as the criteria to optimize the problem; they perform well on the problems with explicitly redundant objectives, e.g., DPF1 and DPF2, but may not work on the problems with implicitly redundant objectives, e.g., DPF3 and DPF4. ORMOS performs worse than SPEA2+SDE on all of the test problems and fails to converge to the PF on DPF4. The decomposition-based methods MOEA/D and RVEA do not perform well on these degenerate problems since only a small proportion of the weight vectors are close to the PF. The k-EMOSS algorithm performs worst on DPF5 since it only selects two of the objectives to optimize at a time, whereas DPF5 has a non-degenerate PF region.

For the 6-objective test instances, SPEA2+SDE and ORMOS obtain the best IGD values on DPF1 and DPF2, respectively. IBEA achieves the best results on DPF3 and DPF5, and NSGA-II outperforms the others on DPF4. In addition, the objective selection-based methods δ-MOSS, k-EMOSS, NCIE, OSP, and ORMOS obtain performance competitive with the best-performing algorithms on DPF1 and DPF2, where the essential objectives are explicitly included in the objective set of the problem. In contrast, they are much inferior to the best-performing algorithms on DPF3 and DPF4, where the essential objectives are not explicitly included: there, the objective reduction methods have to extract the essential objectives instead of selecting them from the objective set of the problem.

For the 10-objective test instances, SPEA2+SDE achieves the best IGD values on DPF1 and DPF5, IBEA on DPF2 and DPF3, and NSGA-II on DPF4. Some of the objective reduction-based algorithms perform better than their host algorithm, and some perform worse. However, the objective reduction-based algorithms potentially have a much lower computational cost than their host algorithms since far fewer objectives are considered in the selection stage of the evolution. We can also see that the performance of all tested MOEAs drops dramatically on DPF2, DPF3, and DPF4 compared with their performance on the lower-dimensional test instances. To better analyze the results on the different test problems, Fig. 5 shows the parallel coordinates plots of the results obtained by the algorithms that achieved the best IGD values. From the plots, we can see that even though SPEA2+SDE obtains the best IGD values on DPF1 and DPF5, it fails to find the solutions on the boundary of the PF, and only a few of its solutions cover the degenerate part of the PF, since the solutions on the degenerate PF should have identical values on the leading objectives (Fig. 5(e)). From Fig. 5(d), we can see that the objective value range of the solutions obtained by NSGA-II is far from the objective value range of the PF.

From the above, it can be found that

1) Pareto-based algorithms (or their variants) are good options for low-dimensional degenerate problems, such as NSGA-II and SPEA2+SDE;

2) Decomposition-based algorithms fail to obtain good results on degenerate problems since a large proportion of the weight vectors may be far from the PF;

3) None of the tested algorithms can obtain good performance on the high-dimensional degenerate instances, which shows the difficulty of the proposed problem suite.

Test instance Method DPF1 DPF2 DPF3 DPF4 DPF5
m = 3, d = 2 NSGA-II 3.80E-03 (1.22E-03) 2.09E-02 (7.12E-04) 5.17E-03 (2.86E-04) 6.59E-03 (3.55E-04) 4.06E-02 (2.02E-03)
MOEA/D 8.26E-03 (5.44E-03) 1.85E+00 (8.76E-02) 6.31E-02 (1.38E-01) 1.85E-01 (4.28E-03) 4.60E-02 (3.07E-05)
IBEA 8.28E-02 (1.06E-02) 2.32E-02 (6.79E-04) 3.93E-02 (1.43E-01) 7.42E-01 (2.41E-02) 5.27E-02 (2.37E-03)
RVEA 8.93E-02 (1.32E-01) 4.11E-01 (8.69E-02) 7.36E-02 (1.35E-01) 6.00E-01 (4.42E-01) 4.62E-02 (3.77E-05)
SPEA2+SDE 2.46E-03 (2.16E-04) 2.05E-02 (8.16E-04) 9.88E-03 (1.78E-03) 9.16E-02 (2.76E-02) 4.84E-02 (2.83E-03)
δ-MOSS 3.52E-03 (9.31E-04) 2.10E-02 (4.16E-03) 3.74E-02 (1.47E-01) 2.74E-02 (6.60E-02) 4.14E-02 (2.31E-03)
k-EMOSS 3.38E-03 (9.21E-04) 2.00E-02 (6.61E-04) 2.71E-01 (9.01E-02) 2.26E-01 (4.14E-04) 6.27E-01 (1.30E-01)
NCIE 3.49E-03 (1.21E-03) 2.01E-02 (6.07E-04) 9.27E-02 (9.53E-03) 2.48E+03 (1.47E+03) 1.23E-01 (2.05E-01)
OSP 2.91E-02 (5.24E-02) 5.14E-02 (2.95E-02) 4.85E-02 (1.78E-02) 2.35E-01 (1.76E-01) 2.90E-01 (8.31E-02)
ORMOS 1.63E-02 (6.06E-02) 2.06E-02 (6.51E-04) 3.81E-02 (1.32E-01) 5.27E+04 (9.95E+04) 4.84E-02 (1.90E-03)
m = 6, d = 3 NSGA-II 2.72E-02 (1.36E-03) 3.40E-01 (1.37E-01) 6.17E-02 (2.34E-03) 8.77E-02 (4.72E-03) 3.20E-01 (1.94E-02)
MOEA/D 4.49E-02 (1.82E-04) 6.82E+00 (1.49E+00) 1.84E-01 (1.31E-01) 3.30E-01 (4.72E-03) 2.66E-01 (1.49E-05)
IBEA 1.75E-01 (2.25E-02) 4.34E-01 (3.65E-01) 6.16E-02 (3.43E-03) 9.50E-01 (1.93E-02) 2.44E-01 (4.47E-03)
RVEA 1.06E-01 (3.43E-02) 3.90E+00 (2.37E+00) 1.82E-01 (5.46E-02) 3.00E-01 (4.68E-02) 2.75E-01 (1.37E-04)
SPEA2+SDE 2.05E-02 (2.44E-04) 2.76E-01 (1.77E-02) 6.40E-02 (3.23E-03) 1.95E-01 (1.80E-02) 2.54E-01 (9.90E-03)
δ-MOSS 5.33E-02 (2.20E-02) 7.02E-01 (1.20E+00) 6.66E-02 (3.30E-03) 1.33E-01 (3.70E-02) 3.15E-01 (1.42E-02)
k-EMOSS 2.70E-02 (1.15E-03) 4.09E-01 (1.46E-01) 2.34E-01 (7.53E-02) 3.58E-01 (2.07E-02) 7.96E-01 (5.23E-02)
NCIE 2.73E-02 (1.17E-03) 2.88E+00 (1.47E+00) 3.21E-01 (2.48E-01) 5.47E-01 (1.19E-01) 9.78E-01 (1.78E-01)
OSP 3.58E-02 (2.25E-03) 1.32E+00 (8.98E-01) 2.01E-01 (6.58E-02) 2.48E-01 (7.38E-02) 7.29E-01 (2.96E-02)
ORMOS 1.84E-01 (1.04E-02) 2.50E-01 (1.48E-02) 1.08E+00 (2.13E-01) 3.15E+05 (1.46E+05) 2.55E-01 (1.10E-02)
m = 10, d = 5 NSGA-II 9.28E-02 (2.32E-03) 2.12E+00 (5.86E-02) 1.85E-01 (3.08E-03) 2.56E-01 (5.67E-03) 2.45E+00 (2.55E-01)
MOEA/D 9.27E-02 (8.07E-04) 2.19E+01 (5.97E+00) 3.94E-01 (1.01E-01) 5.48E-01 (7.69E-03) 4.22E-01 (1.44E-04)
IBEA 2.15E-01 (2.04E-02) 1.90E+00 (9.37E-02) 1.75E-01 (2.86E-03) 1.17E+00 (9.51E-03) 4.11E-01 (3.38E-03)
RVEA 1.65E-01 (1.74E-02) 2.01E+01 (4.61E+00) 3.89E-01 (2.87E-02) 4.78E-01 (2.95E-02) 4.23E-01 (4.38E-04)
SPEA2+SDE 5.42E-02 (2.10E-04) 2.13E+00 (1.37E-01) 1.80E-01 (3.84E-03) 3.75E-01 (1.23E-02) 4.00E-01 (4.26E-03)
δ-MOSS 9.35E-02 (1.07E-02) 2.11E+00 (7.44E-02) 1.94E-01 (5.37E-03) 3.48E-01 (1.46E-01) 2.40E+00 (2.36E-01)
k-EMOSS 8.37E-02 (1.18E-02) 6.79E+00 (2.37E+00) 2.92E-01 (6.05E-03) 4.09E-01 (7.45E-03) 7.45E-01 (8.71E-02)
NCIE 1.91E-01 (9.07E-02) 6.13E+00 (2.22E+00) 4.67E-01 (9.09E-02) 9.06E-01 (1.17E-01) 1.10E+00 (6.66E-02)
OSP 9.09E-02 (5.11E-03) 1.93E+00 (2.21E-01) 2.19E-01 (1.76E-02) 2.87E-01 (1.59E-02) 6.56E-01 (4.27E-02)
ORMOS 1.74E-01 (3.26E-02) 2.50E+00 (9.32E-02) 1.41E+00 (2.09E-01) 3.25E+05 (1.41E+05) 7.97E-01 (2.07E-01)
TABLE II:

The statistical results (mean and standard deviation) of the IGD values on the proposed test problems. The best result regarding the mean for each problem instance is highlighted in boldface.

(a) SPEA2+SDE on DPF1
(b) IBEA on DPF2
(c) IBEA on DPF3
(d) NSGA-II on DPF4
(e) SPEA2+SDE on DPF5
Fig. 5: Nondominated solutions obtained by the algorithms that achieved the best mean IGD values on the 10-objective instances of DPF1 to DPF5, shown for the run associated with each algorithm's best IGD value. The gray lines represent the reference points sampled from the PFs and the black lines denote the solutions obtained by the MOEAs at the end of the optimization.

V-E Impact of the Proposed Three Characteristics

In the last section, we presented the performance of current state-of-the-art algorithms on the proposed functions and found that these functions pose significant challenges to the algorithms. However, we cannot conclude that this underperformance is caused solely by the proposed three characteristics, since the tested functions have different features with respect to their essential objectives (e.g., multimodality, bias, and disconnectedness). To investigate the impact of the proposed three characteristics, in this section we modify the original DPF problems so that they have the same essential objectives as DTLZ5(I, M), yielding DPF1A-DPF5A (see Appendix C). The performance difference of algorithms between these functions can therefore be fully attributed to the proposed characteristics.

In this experiment, we compare the performance of the five objective reduction-based MOEAs on DPF1A-DPF4A, DPF5A, and DTLZ5(I, M) [10]. We test instances of DTLZ5(I, M) and of the proposed problems. The number of decision variables of DTLZ5(I, M) is set by following the recommendation in [10], and the same setting is used for the proposed problems. We run the tested MOEAs multiple times with a fixed population size and maximum number of generations.

(a) δ-MOSS
(b) k-EMOSS
(c) NCIE
(d) OSP
(e) ORMOS
Fig. 6: The solution set of each of the five objective reduction-based algorithms on DTLZ5(I, M), shown for the run associated with its best IGD value, where the grid mesh denotes the PF of the problem.
(a) δ-MOSS on DPF1A
(b) k-EMOSS on DPF1A
(c) NCIE on DPF1A
(d) OSP on DPF1A
(e) ORMOS on DPF1A
(f) δ-MOSS on DPF2A
(g) k-EMOSS on DPF2A
(h) NCIE on DPF2A
(i) OSP on DPF2A
(j) ORMOS on DPF2A
(k) δ-MOSS on DPF3A
(l) k-EMOSS on DPF3A
(m) NCIE on DPF3A
(n) OSP on DPF3A
(o) ORMOS on DPF3A
(p) δ-MOSS on DPF4A
(q) k-EMOSS on DPF4A
(r) NCIE on DPF4A
(s) OSP on DPF4A
(t) ORMOS on DPF4A
(u) δ-MOSS on DPF5A
(v) k-EMOSS on DPF5A
(w) NCIE on DPF5A
(x) OSP on DPF5A
(y) ORMOS on DPF5A
Fig. 7: The solution set of each of the five tested algorithms on DPF1A-DPF5A, shown for the run associated with its best IGD value (measured in the essential objective space), where the grid mesh denotes the PF of the problem in the essential objective space. From top to bottom are the results on DPF1A to DPF4A and on the degenerate part of DPF5A. For the DPF5A instance, the scatter plot shows only the degenerate part of the PF and the solutions whose last objective values are not less than a threshold (since the last objective value on the degenerate part of the PF is not less than that threshold).
(a) δ-MOSS on DPF5A
(b) k-EMOSS on DPF5A
(c) NCIE on DPF5A
(d) OSP on DPF5A
(e) ORMOS on DPF5A
Fig. 8: The solution set of the five algorithms on DPF5A in the run associated with its best IGD value, where the gray lines represent the reference points sampled from the PF of the problem and the black lines denote the solutions.

The results of the run with the best IGD value (measured in the essential objective space) on DTLZ5(I, M) and on the proposed problems are shown in Fig. 6 and Fig. 7, respectively. From the results, we have the following observations:

1) The five algorithms δ-MOSS, k-EMOSS, NCIE, OSP, and ORMOS can all obtain a solution set with good convergence to and diversity over the PF of DTLZ5(I, M). Since the first several objectives of DTLZ5(I, M) are linearly dependent on each other, it is easy for the objective reduction algorithms to discover the essential objectives of the problem.

2) The solution sets of OSP and ORMOS have poor diversity on DPF1A compared with the results on DTLZ5(I, M), while the other three algorithms obtain fairly good results. On DPF2A, the performance of k-EMOSS and NCIE decreases slightly compared with the results on DPF1A, and OSP and ORMOS still cannot obtain diverse solution sets. In addition, the results obtained by these algorithms on DPF1A and DPF2A are not as diverse as the results on DTLZ5(I, M), since the redundant and essential objectives are mutually linearly correlated in DTLZ5(I, M), linearly correlated (but not mutually linearly correlated) in DPF1A, and nonlinearly correlated in DPF2A.

3) ORMOS fails to converge to the PF on DPF3A and DPF4A. The other four methods can converge to the PF, but they fail to maintain the diversity of the solution set. These two test problems are harder than DPF1A and DPF2A since their redundant objectives exist implicitly whereas the redundant objectives of DPF1A and DPF2A exist explicitly.

4) None of these algorithms obtains good results on the degenerate part of the PF of DPF5A. We also show the parallel coordinates plots of the whole solution sets of these five algorithms in Fig. 8, from which we can see that a large number of the solutions obtained by δ-MOSS do not converge to the PF; the objective value range of δ-MOSS's solutions is far from that of the PF, whereas the value of the last objective on the degenerate part of DPF5A's PF is larger than a certain threshold. OSP has only one solution lying on the degenerate part of the PF, as shown in Fig. 7(x). Even though k-EMOSS, NCIE, and ORMOS obtain some solutions that lie on the degenerate part of the PF of DPF5A, they fail to obtain diverse solutions in the non-degenerate part of the PF. It can be seen from Fig. 8(b), Fig. 8(c), and Fig. 8(e) that a large part of the PF is not covered by the solutions of k-EMOSS, NCIE, and ORMOS.

From the above observations, we can see that the proposed three characteristics have a significant impact on the performance of the existing algorithms. They bring different types of difficulties for objective reduction techniques. The implicit redundancy among objectives poses a big challenge to objective reduction methods based on objective selection. The partial redundancy makes all the methods struggle to find and maintain well-distributed solutions on both the degenerate and the non-degenerate segments of the PF.

VI Conclusion

This paper discusses three characteristics that make MOPs degenerate, i.e., explicitly redundant objectives, implicitly redundant objectives, and partially redundant objectives. The first two characteristics give the problem a completely degenerate PF, while the third one results in a partially degenerate PF.

Five test problems are instantiated based on these three characteristics with a uniform formulation. Among them, DPF1 and DPF2 have explicitly redundant objectives, DPF3 and DPF4 have implicitly redundant objectives, and DPF5 has partially redundant objectives. DPF1 and DPF2 are designed to test an algorithm's ability of objective selection, DPF3 and DPF4 to test its ability of objective extraction, and DPF5 to test its ability to maintain different sub-populations on the degenerate and the non-degenerate segments of the PF.

Ten representative MOEAs have been tested on the proposed problems. In contrast to existing degenerate problems, our problems introduce new features (with varying difficulty) that can challenge various objective reduction methods. This has been evidenced in our experimental studies, where none of the tested MOEAs is able to solve all the proposed problems well. This therefore suggests the need to develop new methods for solving MOPs with degenerate PFs.

Appendix A Proof of Theorem 1

Proof of Sufficiency: For any two solutions $\mathbf{x}^a$ and $\mathbf{x}^b$, if $\mathbf{x}^a$ dominates $\mathbf{x}^b$ in the original objective space, we have that

$f_i(\mathbf{x}^a) \leq f_i(\mathbf{x}^b), \quad i = 1, 2,$ (25)

and there exists at least one index $j \in \{1, 2\}$ that satisfies

$f_j(\mathbf{x}^a) < f_j(\mathbf{x}^b).$ (26)

Considering the new objective $f_3 = t(f_1, f_2)$, we obtain that

$f_3(\mathbf{x}^a) = t\big(f_1(\mathbf{x}^a), f_2(\mathbf{x}^a)\big) \quad \text{and} \quad f_3(\mathbf{x}^b) = t\big(f_1(\mathbf{x}^b), f_2(\mathbf{x}^b)\big).$ (27)

Combining (27) and (25), and based on the fact that $t$ is a non-decreasing function with respect to $f_1$ and $f_2$, it follows that

$f_3(\mathbf{x}^a) \leq f_3(\mathbf{x}^b),$ (28)

so that we can conclude that $\mathbf{x}^a$ dominates $\mathbf{x}^b$ in the new objective space.

Proof of Necessity: For any two solutions $\mathbf{x}^a$ and $\mathbf{x}^b$, suppose that $\mathbf{x}^a$ dominates $\mathbf{x}^b$ in the new objective space. Since the set of the original objectives is a subset of the set of the new objectives, there are two cases:

a) $\mathbf{x}^a$ dominates $\mathbf{x}^b$ in the original objective space, which directly completes the proof.

b) the objective values of $\mathbf{x}^a$ and $\mathbf{x}^b$ are equal on the original objectives, i.e.,

$f_i(\mathbf{x}^a) = f_i(\mathbf{x}^b), \quad i = 1, 2,$ (29)

and $\mathbf{x}^a$ satisfies

$f_3(\mathbf{x}^a) < f_3(\mathbf{x}^b).$ (30)

From the definition of the objective $f_3$, we have

$f_3(\mathbf{x}^a) = t\big(f_1(\mathbf{x}^a), f_2(\mathbf{x}^a)\big) = t\big(f_1(\mathbf{x}^b), f_2(\mathbf{x}^b)\big) = f_3(\mathbf{x}^b).$ (31)

Combining the results in (29) and (31), we have $f_3(\mathbf{x}^a) = f_3(\mathbf{x}^b)$, which contradicts the result in (30). This means that case b) does not occur, i.e., $\mathbf{x}^a$ dominates $\mathbf{x}^b$ in the original objective space. This completes the proof.

Appendix B Proof of Theorem 2

Let us first consider the situation of a problem with three objectives and only two essential objectives.

Proof of Sufficiency: For any two solutions, if one dominates the other in the original objective space, we have that

(32)

and satisfies

(33)

Supposing that

(34)

we obtain that

(35)

and the following three possibilities:

Case 1):

(36)

Case 2):

(37)

Case 3):

(38)

Based on (35) and (36)-(38), we can conclude that the dominance relation also holds in the new objective space.

Proof of Necessity: For any two solutions, if one dominates the other in the new objective space, we have that

(39)

and satisfies

(40)

If , it is clear that

(41)

If , we have that

(42)

and the following two possibilities:

Case 1):

(43)

Case 2):

(44)

Combining the results in (41) and (42)-(44), we find that the dominance relation holds in the original objective space as well.

It is clear that the above analysis also holds for the situation with any number of objectives. This completes the proof.

Appendix C Essential Objectives of DPF1A-DPF5A

The definition of the essential objectives of DPF1A-DPF4A and of the degenerate part of DPF5A is the same as that of DTLZ5(I, M) [10]:

(45)

where $n$ is the number of decision variables.

References

  • [1] H. Ishibuchi, H. Masuda, and Y. Nojima, “Pareto fronts of many-objective degenerate test problems,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 807–813, Oct 2016.
  • [2] P. Jain and A. M. Agogino, “Theory of design: An optimization perspective,” Mechanism and Machine Theory, vol. 25, no. 3, pp. 287–303, 1990.
  • [3] K. Musselman and J. Talavage, “A tradeoff cut approach to multiple objective optimization,” Operations Research, vol. 28, no. 6, pp. 1424–1435, 1980.
  • [4] L. Gu, R. Yang, C.-H. Tho, M. Makowskit, O. Faruquet, and Y. L. Y. Li, “Optimisation and robustness for crashworthiness of side impact,” International Journal of Vehicle Design, vol. 26, no. 4, pp. 348–360, 2001.
  • [5] A. Sinha, D. K. Saxena, K. Deb, and A. Tiwari, “Using objective reduction and interactive procedure to handle many-objective optimization problems,” Applied Soft Computing, vol. 13, no. 1, pp. 415–427, 2013.
  • [6] R. M. Hierons, M. Li, X. Liu, S. Segura, and W. Zheng, “SIP: Optimal product selection from feature models using many-objective evolutionary optimisation,” ACM Transactions on Software Engineering and Methodology, vol. 25, no. 3, 2016.
  • [7] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable test problems for evolutionary multiobjective optimization,” in Evolutionary Multiobjective Optimization: Theoretical Advances and Applications, A. Abraham, L. Jain, and R. Goldberg, Eds.    London: Springer London, 2005, pp. 105–145.
  • [8] S. Huband, L. Barone, L. While, and P. Hingston, “A scalable multi-objective test problem toolkit,” in International Conference on Evolutionary Multi-Criterion Optimization.    Springer, Berlin, Heidelberg, 2005, pp. 280–295.
  • [9] S. Huband, P. Hingston, L. Barone, and L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE Transactions on Evolutionary Computation, vol. 10, no. 5, pp. 477–506, October 2006.
  • [10] K. Deb and D. K. Saxena, “On finding Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems,” Indian Institute of Technology Kanpur, Tech. Rep. 2005011, 2005.
  • [11] R. G. Karlsson and M. H. Overmars, “Scanline algorithms on a grid,” BIT Numerical Mathematics, vol. 28, no. 2, pp. 227–241, 1988.
  • [12] H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, “Many-objective and many-variable test problems for visual examination of multiobjective search,” in 2013 IEEE Congress on Evolutionary Computation, June 2013, pp. 1491–1498.
  • [13] Y. Cheung, F. Gu, and H. L. Liu, “Objective extraction for many-objective optimization problems: Algorithm and test problems,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 755–772, Oct 2016.
  • [14] M. Li, S. Yang, and X. Liu, “A test problem for visual investigation of high-dimensional multi-objective search,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2014, pp. 2140–2147.
  • [15] M. Li, C. Grosan, S. Yang, X. Liu, and X. Yao, “Multiline distance minimization: A visualized many-objective test problem suite,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 61–78, Feb 2018.
  • [16] D. K. Saxena, Q. Zhang, J. A. Duro, and A. Tiwari, “Framework for many-objective test problems with both simple and complicated pareto-set shapes,” in International Conference on Evolutionary Multi-Criterion Optimization.    Springer, 2011, pp. 197–211.
  • [17] H.-L. Liu, L. Chen, Q. Zhang, and K. Deb, “Adaptively allocating search effort in challenging many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, 2017.
  • [18] D. Brockhoff and E. Zitzler, “Objective reduction in evolutionary multiobjective optimization: Theory and applications,” Evolutionary Computation, vol. 17, no. 2, pp. 135–166, 2009.
  • [19] H. K. Singh, A. Isaacs, and T. Ray, “A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems,” IEEE Transactions on Evolutionary Computation, vol. 15, no. 4, pp. 539–556, August 2011.
  • [20] D. K. Saxena, J. A. Duro, A. Tiwari, K. Deb, and Q. Zhang, “Objective reduction in many-objective optimization: Linear and nonlinear algorithms,” IEEE Transactions on Evolutionary Computation, vol. 17, no. 1, pp. 77–99, 2013.
  • [21] A. L. Jaimes, C. A. C. Coello, H. Aguirre, and K. Tanaka, “Objective space partitioning using conflict information for solving many-objective problems,” Information Sciences, vol. 268, pp. 305–327, 2014.
  • [22] H. Wang and X. Yao, “Objective reduction based on nonlinear correlation information entropy,” Soft Computing, vol. 20, no. 6, pp. 2393–2407, 2016.
  • [23] Y. Yuan, Y. S. Ong, A. Gupta, and H. Xu, “Objective reduction in many-objective optimization: Evolutionary multiobjective approaches and comprehensive analysis,” IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 189–210, April 2018.
  • [24] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “Test problems for large-scale multiobjective and many-objective optimization,” IEEE Transactions on Cybernetics, vol. 47, no. 12, pp. 4108–4121, Dec 2017.
  • [25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, April 2002.
  • [26] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, December 2007.
  • [27] E. Zitzler and S. Künzli, “Indicator-based selection in multiobjective search,” in International Conference on Parallel Problem Solving from Nature.    Springer, Berlin, Heidelberg, 2004, pp. 832–842.
  • [28] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A reference vector guided evolutionary algorithm for many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 5, pp. 773–791, Oct 2016.
  • [29] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength pareto evolutionary algorithm for multiobjective optimization,” in Evolutionary Methods for Design, Optimisation and Control.    International Center for Numerical Methods in Engineering, 2002, pp. 95–100.
  • [30] M. Li, S. Yang, and X. Liu, “Shift-based density estimation for Pareto-based algorithms in many-objective optimization,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, June 2014.
  • [31] D. Brockhoff and E. Zitzler, “Improving hypervolume-based multiobjective evolutionary algorithms by using objective reduction methods,” in 2007 IEEE Congress on Evolutionary Computation, Sept 2007, pp. 2086–2093.
  • [32] K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part i: Solving problems with box constraints,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, Aug 2014.
  • [33] P. A. N. Bosman and D. Thierens, “The balance between proximity and diversity in multiobjective evolutionary algorithms,” IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 174–188, April 2003.
  • [34] C. A. C. Coello and M. R. Sierra, “A study of the parallelization of a coevolutionary multi-objective evolutionary algorithm,” in Proceedings of the 3rd Mexican International Conference on Artificial Intelligence. Springer, 2004, pp. 688–697.