1 Introduction
Many real-life disciplines (e.g., optimal design [1], economics [2] and water management [3]) often involve optimization problems with multiple conflicting objectives, known as multiobjective optimization problems (MOPs). Rather than a global optimum that optimizes all objectives simultaneously, in multiobjective optimization, decision makers often look for a set of Pareto-optimal solutions that represent the best trade-offs among conflicting objectives. The balance between convergence and diversity is the cornerstone of multiobjective optimization. In particular, convergence means the approximation to the Pareto-optimal set should be as close as possible, while diversity means the spread of the trade-off solutions should be as uniform as possible.
Evolutionary algorithms, which in principle can approximate the Pareto-optimal set in a single run, have been widely accepted as a major approach for multiobjective optimization [4]. Over the past three decades, many efforts have been devoted to developing evolutionary multiobjective optimization (EMO) algorithms, which have achieved recognized performance on problems with two or three objectives [5, 6, 7, 8, 9, 10, 11, 12]. With the development of science and technology, real-life optimization scenarios bring more significant challenges to algorithm design, e.g., complicated landscapes, multimodality and high dimensionality. As reported in [13, 14, 15], the performance of traditional EMO algorithms deteriorates severely as the number of objectives increases. Generally speaking, researchers attribute this performance deterioration to three major reasons: the loss of selection pressure of Pareto domination [14], the difficulty of density estimation in a high-dimensional space [16] and its anti-convergence phenomenon [17], and the exponentially increased computational complexity [18].

As a remedy to the loss of selection pressure of Pareto domination in a high-dimensional space, many modified dominance relationships have been developed to strengthen the comparability between solutions, e.g., ε-dominance [19], fuzzy Pareto-dominance [20], relaxed forms of Pareto-optimality [21], preference order ranking [22], control of dominance area [23] and grid dominance [24]. Very recently, a generalized Pareto-optimality was developed in [25] to expand the dominance area of Pareto domination, so that the percentage of nondominated solutions in a population does not increase dramatically with the number of objectives. Different from [23], the expansion envelope in [25] is kept the same for all solutions. Other than the Pareto dominance-based approaches, an alternative optimality criterion was proposed in [26] to help rank the solutions.
The loss of selection pressure can also be remedied by an effective diversity maintenance strategy. To relieve the anti-convergence phenomenon, [17] applied the maximum spread indicator developed in [27] to activate and deactivate the diversity promotion mechanism in NSGA-II [5]. To facilitate density estimation in a high-dimensional space, [16] proposed to replace the crowding distance used in NSGA-II by counting the number of solutions associated with each predefined reference point. In particular, a solution is considered to be associated with a reference point if it has the shortest perpendicular distance to that reference point. Instead of handling the convergence and the diversity separately, [28] proposed to coevolve a set of target vectors, each of which represents an optimization goal. The fitness value of a solution is defined by aggregating the number of current goals it achieves, i.e., dominates. The population diversity is implicitly maintained by the goals widely spread in the objective space, while the selection pressure towards convergence is strengthened by coevolving the optimization goals with the population.
The exponentially increased computational costs come from two aspects: 1) the exponentially increased computational complexity of calculating the hypervolume, which significantly hinders the scalability of indicator-based EMO algorithms; 2) the significantly increased computational costs of maintaining the nondomination levels [5] of a large number of solutions in a high-dimensional space. As for the former aspect, some improved methods, from the perspective of computational geometry [29, 30, 31] or Monte Carlo sampling [18], have been proposed to enhance the efficiency of hypervolume calculation. As for the latter aspect, many efforts have been devoted to applying advanced data structures to improve the efficiency of the nondominated sorting procedure [32, 33, 34]. It is worth noting that our recent study [35] showed that it can be more efficient to update the nondomination levels by leveraging the population structure than to sort the population from scratch in every iteration.
As reported in [36, 37, 38], decomposition-based EMO methods have become increasingly popular for solving problems with more than three objectives, which are often referred to as many-objective optimization problems (MaOPs). Since decomposition-based EMO methods transform an MOP into several single-objective optimization subproblems, they do not suffer from the loss of selection pressure of Pareto domination in a high-dimensional space. In addition, the update of the population relies on the comparison of fitness values, so the computational costs do not increase excessively with the dimensionality. As reported in [39], different subproblems, which focus on different regions of the objective space, have various degrees of difficulty. Some superior solutions can easily occupy several subproblems at once. This is obviously harmful to the population diversity and worsens as the dimensionality increases. To overcome this issue, [40] built an interrelationship between subproblems and solutions, where each subproblem can only be updated by its related solutions. Based on a similar idea, [41] restricted a solution to only updating one of its closest subproblems. In [42], two metrics were proposed to measure the convergence and diversity separately. More specifically, the objective vector of a solution is first projected onto its closest weight vector. Then, the distance between the projected point and the ideal point measures the solution's convergence, while the distance between the projected point and the original objective vector measures the solution's diversity. In the end, a diversity-first update scheme was developed according to these two metrics. Analogously, [43] developed a modified achievement scalarizing function as the convergence metric, while an angle-based density estimation method was employed to measure the solution's diversity.
Recently, there has been a growing trend of leveraging the advantages of the decomposition- and Pareto-based methods within the same framework. For example, [44] suggested using Pareto domination to pre-screen the population. The local density is estimated by the number of solutions associated with a predefined weight vector. In particular, a solution located in an isolated region has a higher chance to survive to the next iteration. Differently, [45] developed an angle-based method to estimate the local crowdedness around a weight vector. In addition to density estimation, the weight vectors divide the objective space into different subspaces, which are finally used to estimate the local strength value [6] of each solution. Analogously, in [46], nondominated sorting is conducted upon all the subspaces, where solutions in different subspaces are considered nondominated with each other. [47] developed a dual-population paradigm which coevolves two populations simultaneously. These populations are maintained by different selection mechanisms, while their interaction is implemented through a restricted mating selection mechanism.
Although decomposition-based EMO methods have been extensively used for MaOPs, a recent comprehensive study [38] demonstrated that their performance strongly depends on the shape of the Pareto front (PF). This phenomenon can be attributed to two major reasons:

Most, if not all, decomposition-based EMO methods merely consider a single scalarizing function in fitness assignment. Since the contour line of a scalarizing function does not adapt to a particular problem's characteristics, the flexibility is restricted.

As discussed in the previous paragraph, different regions of the PF have various difficulties. The balance between convergence and diversity of the search process can be easily biased by some dominantly superior solutions. The increasing dimensionality exacerbates this phenomenon.
Bearing the above considerations in mind, this paper develops a new decomposition-based method, called adversarial decomposition, for many-objective optimization. Generally speaking, it has the following three features:

It maintains two coevolving populations simultaneously, each of which is maintained by a different scalarizing function. In particular, one population uses a boundary intersection-based scalarizing function, while the other applies a modified achievement scalarizing function. In this regard, the search behaviors of these two populations become different, where one is convergence-oriented while the other is diversity-oriented.

In order to make these two populations complement each other, they use the ideal point and the nadir point, respectively, as the reference points of their scalarizing functions. By doing this, the two populations evolve following two sets of adversarial search directions.

During the mating selection process, the two populations are first stably matched to form a set of one-one solution pairs. In particular, solutions within the same pair concentrate on similar regions of the PF. Thereafter, each solution pair can contribute at most one mating parent for offspring generation. By doing this, we can expect a uniform spread of the computational effort over the entire PF.
The remainder of this paper is organized as follows. Section 2 provides some preliminaries useful to this paper. Section 3 describes the technical details of our proposed method step by step. The empirical studies are presented and discussed in Section 4 and Section 5. Finally, Section 6 concludes the paper and provides some future directions.
2 Preliminaries
This section first provides some basic definitions of multiobjective optimization. Afterwards, we briefly introduce the decomposition approach to multiobjective optimization.
2.1 Basic Definitions
The MOP considered in this paper can be mathematically defined as follows:
(1)   minimize F(x) = (f_1(x), ..., f_m(x))^T
      subject to x ∈ Ω

where x = (x_1, ..., x_n)^T is an n-dimensional decision variable vector from the decision space Ω, and F(x) is an m-dimensional objective vector in the objective space R^m.
Definition 1.
Given two solutions x^1, x^2 ∈ Ω, x^1 is said to dominate x^2, denoted by x^1 ⪯ x^2, if and only if f_i(x^1) ≤ f_i(x^2) for all i ∈ {1, ..., m} and F(x^1) ≠ F(x^2).
Definition 2.
A solution x* ∈ Ω is called Pareto-optimal if and only if there is no other solution x ∈ Ω that dominates it.
Definition 3.
The Pareto-optimal set (PS) is defined as the set of all Pareto-optimal solutions. The set of their corresponding objective vectors forms the Pareto-optimal front (PF).
Definition 4.
The ideal point is z* = (z_1*, ..., z_m*)^T, where z_i* = min_{x ∈ Ω} f_i(x) for all i ∈ {1, ..., m}. The nadir point is z^nad = (z_1^nad, ..., z_m^nad)^T, where z_i^nad = max_{x ∈ PS} f_i(x).
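Definitions 1 to 4 translate directly into code. The following sketch (Python with NumPy; an assumption-laden illustration, not the paper's implementation) checks Pareto dominance for minimization and computes the component-wise extremes of a Pareto-optimal front, which coincide with the ideal and nadir points:

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization):
    no worse in every objective and strictly better in at least one."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def ideal_and_nadir(front):
    """Component-wise minima (ideal point) and maxima (nadir point)
    over a set of Pareto-optimal objective vectors (Definition 4)."""
    front = np.asarray(front, dtype=float)
    return front.min(axis=0), front.max(axis=0)

# Hypothetical bi-objective front used purely for illustration
front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
z_ideal, z_nadir = ideal_and_nadir(front)   # [0, 0] and [1, 1]
```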
2.2 Decomposition
Under some mild conditions, the task of approximating the PF can be decomposed into several scalar optimization subproblems, each of which is formed by a weighted aggregation of all individual objectives [48]. In the classic multiobjective optimization literature [49], there have been several established approaches for constructing scalarizing functions, among which the weighted Tchebycheff (TCH) and the penalty-based boundary intersection (PBI) [48] are the most popular ones. More specifically, the TCH function is mathematically defined as:
(2)   g^tch(x | w, z*) = max_{1 ≤ i ≤ m} { |f_i(x) − z_i*| / w_i }

where w = (w_1, ..., w_m)^T is a user-specified weight vector, with w_i ≥ 0 for all i ∈ {1, ..., m} and Σ_{i=1}^m w_i = 1. Note that w_i is replaced by a very small number in case w_i = 0. The contour line of the TCH function is shown in fig:decomposition(a). From this figure, we can clearly see that the control area (i.e., the area that holds better solutions) of the TCH function is similar to the Pareto domination defined in Definition 1, e.g., solutions located in the gray shaded area (i.e., the control area of the marked solution) are better than that solution. Note that the TCH function cannot discriminate weakly dominated solutions [49]: two solutions can share the same TCH function value even though one dominates the other according to Definition 1.
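As a concrete illustration, here is a minimal TCH evaluation. The division form below is an assumption consistent with the zero-weight caveat above, and the example reproduces the weak-dominance blind spot just described:

```python
import numpy as np

def tch(f, w, z_star, eps=1e-6):
    """Weighted Tchebycheff (TCH) value of objective vector f for weight
    vector w and ideal point z_star (division form, assumed)."""
    f = np.asarray(f, dtype=float)
    z = np.asarray(z_star, dtype=float)
    w = np.asarray(w, dtype=float)
    w = np.where(w == 0.0, eps, w)   # avoid division by zero, per the text
    return float(np.max(np.abs(f - z) / w))

# One solution dominates the other, yet TCH cannot tell them apart:
w, z = [0.5, 0.5], [0.0, 0.0]
g_a = tch([0.6, 0.2], w, z)   # dominates the next point
g_b = tch([0.6, 0.4], w, z)   # same max-term, hence same TCH value
```

Both values are driven by the first objective alone, so the dominated solution is not penalized, which motivates the augmentation term introduced later.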
As for the PBI function, it is mathematically defined as:

(3)   g^pbi(x | w, z*) = d_1 + θ d_2

where

(4)   d_1 = ||(F(x) − z*)^T w|| / ||w||
      d_2 = ||F(x) − (z* + d_1 w / ||w||)||

and θ ≥ 0 is a user-specified penalty parameter.
As discussed in [44], d_1 and d_2 measure the convergence and diversity of x with regard to w, respectively. The balance between convergence and diversity is parameterized by θ, which also controls the contour line of the PBI function. In fig:decomposition(b), we present the contour lines of PBI functions with different θ settings.
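A minimal sketch of the PBI computation, decomposing the translated objective vector into its projection length d_1 along the reference line (convergence) and the perpendicular residual d_2 (diversity):

```python
import numpy as np

def pbi(f, w, z_star, theta=5.0):
    """Penalty-based boundary intersection value g = d1 + theta * d2.
    d1: projection of F(x) - z* onto the reference line through z* along w;
    d2: perpendicular distance from F(x) to that line."""
    v = np.asarray(f, dtype=float) - np.asarray(z_star, dtype=float)
    u = np.asarray(w, dtype=float) / np.linalg.norm(w)  # unit direction of w
    d1 = float(np.dot(v, u))
    d2 = float(np.linalg.norm(v - d1 * u))
    return d1 + theta * d2, d1, d2

# A point off the reference line of w = (1, 1) pays the theta-scaled penalty
g, d1, d2 = pbi([1.0, 0.0], [1.0, 1.0], [0.0, 0.0], theta=5.0)
```

With theta = 5.0 the perpendicular term dominates g, which is why larger theta narrows the control area around each weight vector.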
3 Many-Objective Optimization Algorithm Based on Adversarial Decomposition
In this section, we introduce, step by step, the technical details of our proposed many-objective optimization algorithm based on adversarial decomposition, denoted as MOEA/AD, whose pseudo code is given in alg:moead2p. At the beginning, we initialize a population of solutions via random sampling over the decision space, the ideal and nadir points, and a set of weight vectors, and we build their neighborhood structure according to the method in [44]. Afterwards, we assign the initial population directly to the two coevolving populations, i.e., the diversity population and the convergence population. Note that the two populations share the same weight vectors, each of which corresponds to a unique subproblem for each population, respectively. To facilitate the mating selection process, we initialize a matching array and a sentinel array: the former records, for each solution in the diversity population, its paired solution in the convergence population, while the latter indicates whether this pair of solutions works on similar regions of the PF. During the main while loop, the mating parents are selected from the solution pairs. The generated offspring solution is used to update the two populations separately. After each generation, we divide the solutions of the two populations into different solution pairs for the next round's mating selection process. The major components of MOEA/AD are explained in the following subsections.
3.1 Adversarial Decomposition
As discussed in Section 1, the flexibility of decomposition-based methods is restricted by the use of a single scalarizing function. Bearing this consideration in mind, this paper develops an adversarial decomposition method. Its basic idea is to maintain two coevolving and complementary populations simultaneously, each of which is maintained by a different scalarizing function.
More specifically, one population is maintained by the PBI function introduced in Section 2.2, where θ is set as recommended in [44]. The other population is maintained by a modified augmented achievement scalarizing function (MAASF), defined as follows:
(5)   g^maasf(x | w, z^nad) = max_{1 ≤ i ≤ m} { (f_i(x) − z_i^nad) / w_i } + ρ Σ_{i=1}^m (f_i(x) − z_i^nad) / w_i

where ρ > 0 is an augmentation coefficient. Compared with the TCH function in eq. (2), the MAASF uses the nadir point to replace the ideal point, and the absolute operator is removed to allow f_i(x) to be smaller than z_i^nad, for i ∈ {1, ..., m}. Furthermore, the augmentation term of the MAASF helps avoid weakly Pareto-optimal solutions. As shown in fig:decomposition(c), the contour line of the MAASF is the same as that of the TCH function in case ρ = 0, while the control area of the MAASF becomes wider when setting ρ > 0. In this case, the MAASF is able to discriminate weakly dominated solutions, e.g., in fig:decomposition(c), the MAASF value of the dominating solution is better than that of the dominated one when ρ > 0. Here we use a sufficiently small ρ, as recommended in [50].
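A sketch of the MAASF follows. The exact placement of the augmentation term is an assumption (the standard augmented-ASF form), and the example shows it separating two solutions that the plain max-term, as in TCH, cannot distinguish:

```python
import numpy as np

def maasf(f, w, z_nadir, rho=1e-4, eps=1e-6):
    """Modified augmented achievement scalarizing function (sketch):
    max-term measured against the nadir point plus a small augmentation
    sum; rho is an assumed small augmentation coefficient.  No absolute
    value is taken, so f_i may fall below the nadir value."""
    f = np.asarray(f, dtype=float)
    z = np.asarray(z_nadir, dtype=float)
    w = np.asarray(w, dtype=float)
    w = np.where(w == 0.0, eps, w)
    terms = (f - z) / w
    return float(np.max(terms) + rho * np.sum(terms))

# The augmentation term breaks the tie between a solution and one it dominates:
w, z_nad = [0.5, 0.5], [1.0, 1.0]
g_a = maasf([0.6, 0.2], w, z_nad)   # dominates the next point
g_b = maasf([0.6, 0.4], w, z_nad)   # same max-term, worse augmentation sum
```

Minimization is assumed, so the strictly smaller g_a correctly favors the dominating solution.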
To deal with problems having different scales of objectives, we normalize the objective values before evaluating the scalarizing functions, i.e., each objective is replaced by its normalized value \bar{f}_i(x) = (f_i(x) − z_i*) / (z_i^nad − z_i*), for all i ∈ {1, ..., m}. By doing this, the PBI function becomes:

(6)   \bar{g}^pbi(x | w) = \bar{d}_1 + θ \bar{d}_2

where \bar{d}_1 and \bar{d}_2 are computed as in eq. (4) with F(x) replaced by its normalized counterpart \bar{F}(x). The MAASF is rewritten as:

(7)   \bar{g}^maasf(x | w) = max_{1 ≤ i ≤ m} { (\bar{f}_i(x) − 1) / w_i } + ρ Σ_{i=1}^m (\bar{f}_i(x) − 1) / w_i

since the nadir point becomes (1, ..., 1)^T after normalization.
In the following paragraphs, we will discuss the complementary effects achieved by the PBI function and MAASF.

As shown in fig:decomposition(b) and fig:decomposition(c), the control area of the PBI function with the adopted θ setting is much narrower than that of the MAASF. In this case, the population is driven towards the corresponding weight vectors by using the PBI function as the selection criterion. Moreover, compared to the MAASF, the control areas shared by different weight vectors are smaller for the PBI function. Accordingly, different weight vectors have a larger chance to hold different elite solutions, and we can expect an improvement in the population diversity. On the other hand, due to the narrower control area, the selection pressure of the PBI function with regard to convergence is not as strong as that of the MAASF. In other words, some solutions which can update a subproblem maintained by the MAASF might not be able to update the subproblem maintained by the PBI function. For example, as shown in fig:decomposition(b), a solution can have a worse PBI function value than a solution it dominates, with regard to the same weight vector. In this case, the PBI function has a high risk of compromising the population convergence.

The other reason, which results in the different behaviors of the PBI function and MAASF, is their adversarial search directions by using the ideal point and nadir point respectively. More specifically, the PBI function pushes solutions toward the ideal point as close as possible; while the MAASF pushes the solutions away from the nadir point as far as possible. Therefore, given the same set of weight vectors, the search regions of the PBI function and MAASF complement each other. For example, for a convex PF shown in fig:weights(a), solutions found by the PBI function using the ideal point concentrate on the central region of the PF; in contrast, those found by the MAASF using the nadir point fill the gap between the central region and the boundary. As for a concave PF shown in fig:weights(b), solutions found by the PBI function using the ideal point sparsely spread over the PF; while those found by the MAASF using the nadir point improve the population diversity.
In summary, by using the two scalarizing functions introduced above, i.e., the PBI function and the MAASF, the adversarial decomposition method makes the two coevolving populations complementary: one is diversity-oriented (the diversity population) and the other is convergence-oriented (the convergence population). In addition, the search regions are also diversified, so that the solutions are able to cover a wider range of the PF.
3.2 Population Update
After an offspring solution is generated, it is used to update the diversity population and the convergence population separately. Note that the optimal solution of each subproblem is assumed to lie along its corresponding reference line, which connects the origin and the weight vector [40]. Thus, to make the diversity population as diversified as possible, we expect different subproblems to hold different elite solutions. In this case, we restrict the offspring to only updating its closest subproblem (as shown in line 1 to line 5 of alg:update). As for the convergence population, its major purpose is to propel the population towards the PF. To accelerate the convergence progress, we allow a dominantly superior solution to take over more than one subproblem. In particular, we first sort the priorities of the different subproblems according to their distances to the offspring. The offspring can then update each of its closest subproblems in case it has a better MAASF value (as shown in line 6 to line 15 of alg:update). It is worth noting that we reserve two additional terms, in line 13 of alg:update, to facilitate the mating selection procedure introduced in Section 3.3: one is the degree of closeness of the updated solution to its corresponding subproblem; the other is the index of this solution's closest subproblem.
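The asymmetric update rules can be sketched as follows. The scalarizing function is passed in as a callable (a plain weighted sum stands in below), and the cap of two subproblems on the convergence side is purely illustrative, since the text only says a superior solution may take over more than one subproblem:

```python
import numpy as np

def perp_dist(f, w):
    """Perpendicular distance from objective vector f to the reference
    line through the origin along weight vector w."""
    u = np.asarray(w, dtype=float) / np.linalg.norm(w)
    f = np.asarray(f, dtype=float)
    return float(np.linalg.norm(f - np.dot(f, u) * u))

def update_diversity(sols, fvals, y, fy, weights, scalar):
    """Diversity population: offspring y may only replace the incumbent
    of its single closest subproblem."""
    k = int(np.argmin([perp_dist(fy, w) for w in weights]))
    if scalar(fy, weights[k]) < scalar(fvals[k], weights[k]):
        sols[k], fvals[k] = y, list(fy)

def update_convergence(sols, fvals, y, fy, weights, scalar, cap=2):
    """Convergence population: a dominantly superior offspring may take
    over up to `cap` subproblems, tried in order of closeness."""
    order = np.argsort([perp_dist(fy, w) for w in weights])
    for k in order[:cap]:
        if scalar(fy, weights[k]) < scalar(fvals[k], weights[k]):
            sols[k], fvals[k] = y, list(fy)

# Two subproblems; the offspring "y" is better on both under the stand-in scalar
weights = [[1.0, 0.0], [0.0, 1.0]]
scalar = lambda f, w: float(np.dot(f, w))   # stand-in scalarizing function
sols_d, f_d = ["a", "b"], [[0.5, 0.5], [0.5, 0.5]]
update_diversity(sols_d, f_d, "y", [0.3, 0.05], weights, scalar)
sols_c, f_c = ["a", "b"], [[0.5, 0.5], [0.5, 0.5]]
update_convergence(sols_c, f_c, "y", [0.3, 0.05], weights, scalar)
```

The diversity side replaces only the closest subproblem, while the convergence side lets the same offspring occupy both, which mirrors the asymmetry described above.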
3.3 Mating Selection and Reproduction
The interaction between the two coevolving populations is an essential step in algorithms that consider multiple populations [47, 51]. To take advantage of the complementary effects between the diversity population and the convergence population, the interaction is implemented as a restricted mating selection mechanism that chooses the mating parents according to their working regions. Generally speaking, it contains two consecutive steps: one is the pairing step, which pairs each solution in the diversity population with a solution in the convergence population; the other is the mating selection step, which selects the appropriate parents for offspring reproduction. We illustrate them in detail as follows.
3.3.1 Pairing Step
To facilitate the latter mating selection step, the pairing step divides the two populations into different solution pairs, each of which contains one solution from the diversity population and one from the convergence population. This is achieved by finding a one-one stable matching between the solutions of the two populations. As a result, solutions in the same pair are regarded as having similar working regions on the PF.
To find a stable matching between the two populations, we first need to define their mutual preferences. Specifically, since each solution in the diversity population is close to its corresponding subproblem, we define the preference of a solution x^d in the diversity population for a solution x^c in the convergence population as:

(8)   Δ(x^d, x^c) = d_2(F(x^c), w)

where w is the weight vector of the subproblem occupied by x^d and d_2(·, ·) is the perpendicular distance used in eq. (4). As for the preference of x^c for x^d, it is defined as the distance between their objective vectors:

(9)   Δ(x^c, x^d) = ||F(x^c) − F(x^d)||
Then, we sort the preference list of each solution in ascending order and apply our recently developed two-level one-one stable matching method [52, 53] to find a stable matching. Note that the two-level stable matching method is able to match each agent with one of its most preferred partners. Since a solution always locates within the local area spanned by its closest weight vectors, the length of the preference list can be reduced in the first-level matching process [53]. As a result, the solutions matched in the first-level stable matching should work on similar regions of the PF. For example, as shown in fig:match, the solution pairs formed in the first-level stable matching are surrounded by the red solid curves. From this figure, we can see that these matched solutions are close to each other and work on similar regions. Therefore, we set the corresponding entry of the sentinel array to 1, indicating that the corresponding subproblems hold collaborative information. During the second-level matching process, the remaining solutions are matched based on their complete preference lists. Note that the solutions matched in the second-level stable matching are no longer guaranteed to work on similar regions. As shown in fig:match, the solution pair formed in the second-level stable matching, surrounded by the red dotted curve, are away from each other. Thus, the corresponding sentinel entry is set to 0. The pseudo code of this pairing step is presented in alg:stm2l.
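The pairing step rests on stable matching. The paper's two-level method [52, 53] runs deferred acceptance first on truncated and then on complete preference lists; the single-level deferred-acceptance core it builds on can be sketched as:

```python
def stable_matching(pref_d, pref_c):
    """Gale-Shapley deferred acceptance (one-level sketch).
    pref_d[i]: solution i of the first population's preference-ordered
    list of indices into the second population; pref_c likewise.
    Returns engaged_to, where engaged_to[c] = d means c is matched to d."""
    n = len(pref_d)
    # rank_c[c][d]: how much c likes d (lower is better)
    rank_c = [{d: r for r, d in enumerate(p)} for p in pref_c]
    next_prop = [0] * n          # next candidate each proposer will try
    engaged_to = [None] * n
    free = list(range(n))
    while free:
        d = free.pop()
        c = pref_d[d][next_prop[d]]
        next_prop[d] += 1
        if engaged_to[c] is None:
            engaged_to[c] = d                    # c was free: accept
        elif rank_c[c][d] < rank_c[c][engaged_to[c]]:
            free.append(engaged_to[c])           # c trades up, old d freed
            engaged_to[c] = d
        else:
            free.append(d)                       # c rejects d
    return engaged_to

# Both proposers prefer partner 0; its own ranking breaks the conflict
match = stable_matching([[0, 1], [0, 1]], [[0, 1], [0, 1]])
```

In MOEA/AD the preference lists would be built from the distance-based preferences of the pairing step; here they are arbitrary illustrative lists.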
3.3.2 Mating Selection Step
The mating parents consist of two parts: one is the principal parent; the others come from its neighbors. Note that each solution pair is only allowed to contribute at most one mating parent, to avoid wasting computational resources on similar regions. Given a solution pair, the first step is to decide the population from which the principal parent is selected. This depends on the following three criteria.

The first criterion is the subproblem's relative improvement. For the convergence population, it is defined as:

(10)   R_c = [ g^maasf(x'^c | w) − g^maasf(x^c | w) ] / g^maasf(x'^c | w)

where x^c and x'^c are respectively the current and previous solutions of the corresponding subproblem in the convergence population. For the diversity population, it is analogously defined as:

(11)   R_d = [ g^pbi(x'^d | w) − g^pbi(x^d | w) ] / g^pbi(x'^d | w)

If R_c and R_d are in a tie, we need the following two secondary criteria for decision making.

As discussed in Section 3.1, a solution found by the PBI function is not guaranteed to be nondominated. If the candidate from the diversity population is dominated by a solution in the convergence population, it is inappropriate to be chosen as a mating parent.

As discussed in Section 3.2, a dominantly superior solution can occupy more than one subproblem under the MAASF. In this case, such a solution may occupy a subproblem away from its working region, which makes it inadequate as a mating parent.
The pseudo code of the principal parent selection mechanism is given in alg:popSelect. If the relative improvement of the convergence-side subproblem is larger than that of the diversity-side one, the corresponding subproblem has a higher potential for further improvement, and the principal parent is selected from the convergence population. Otherwise, the solution from the diversity population is chosen as the principal parent (line 1 to line 4 of alg:popSelect). If the two relative improvements are in a tie, we check the domination status of the convergence-side solution and its closeness to the corresponding subproblem (line 6 to line 11 of alg:popSelect). By comparing the subproblems' relative improvements, we can expect an efficient allocation of computational resources to different regions of the PF. The two secondary criteria implicitly push the solutions towards the corresponding weight vectors, thus improving the population diversity.
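The decision logic above can be sketched as follows; the exact form of the relative improvement and the boolean encoding of the two secondary criteria are assumptions for illustration:

```python
def relative_improvement(g_prev, g_curr):
    """Relative improvement of a subproblem between the scalarizing values
    of its previous and current solutions (assumed form: (prev - curr)/prev)."""
    return (g_prev - g_curr) / g_prev if g_prev != 0.0 else 0.0

def choose_principal(r_c, r_d, c_is_dominated, c_off_region):
    """Decide which population supplies the principal parent.  r_c and r_d
    are the relative improvements of the paired subproblems; the two flags
    encode the secondary criteria (hypothetical representation)."""
    if r_c > r_d:
        return "convergence"
    if r_d > r_c:
        return "diversity"
    # Tie: a dominated convergence-side solution, or one occupying a
    # subproblem away from its working region, is an inadequate parent.
    return "diversity" if (c_is_dominated or c_off_region) else "convergence"
```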
After the selection of the principal parent, the other mating parents are selected from the neighbors of the subproblem occupied by the principal parent. More specifically, if the principal parent comes from the diversity population, we store the solutions of its neighboring subproblems, from both populations, in a temporary mating pool. Note that only those subproblems flagged as having collaborative information (i.e., whose sentinel entry is 1) are considered when choosing solutions from the convergence population (line 5 and line 8 of alg:matingSelect). On the other hand, if the principal parent comes from the convergence population, only solutions from the convergence population have the chance to be stored in the mating pool. Note that we do not consider a solution whose closest subproblem coincides with that of the principal parent (line 12 and line 14 of alg:matingSelect). Once the mating pool is set up, the other mating parents are randomly chosen from it.
4 Experimental Setup
In this section, we describe the setup of our empirical studies, including the benchmark problems, performance indicator, peer algorithms and their parameter settings.
4.1 Benchmark Problems
Here we choose DTLZ1 to DTLZ4 [56], WFG1 to WFG9 [57], and their minus versions proposed in [58], i.e., DTLZ1^{-1} to DTLZ4^{-1} and WFG1^{-1} to WFG9^{-1}, to form the benchmark problems of our empirical studies. In particular, the number of objectives is set as m ∈ {3, 5, 8, 10, 15}. The number of decision variables of the DTLZ and DTLZ^{-1} problem instances [56] is set to n = m + k − 1, where k = 5 for DTLZ1 and DTLZ1^{-1} and k = 10 for the others. As for the WFG and WFG^{-1} problem instances [57], we set n = k + l, where k and l are the position- and distance-related parameters, respectively. Note that the DTLZ and WFG benchmark problems have been widely used for benchmarking the performance of many-objective optimizers, while their minus versions were proposed to investigate the resilience to irregular PF shapes. All these benchmark problems are scalable to any number of objectives.
4.2 Performance Indicator
In our empirical studies, we choose the widely used Hypervolume (HV) indicator [59] to quantitatively evaluate the performance of a manyobjective optimizer. Specifically, the HV indicator is calculated as:
(12)   HV(S) = VOL( ∪_{x ∈ S} [f_1(x), z_1^r] × ... × [f_m(x), z_m^r] )

where S is the solution set, z^r = (z_1^r, ..., z_m^r)^T is a reference point dominated by all Pareto-optimal objective vectors, and VOL(·) indicates the Lebesgue measure. In our empirical studies, the objective vectors are normalized before calculating the HV, and the reference point is set accordingly. The larger the HV value, the better the quality of S for approximating the true PF. Each algorithm is run 31 times independently, and the Wilcoxon's rank sum test at the 5% significance level is performed to show whether a peer algorithm is significantly outperformed by our proposed MOEA/AD. Note that we choose, of the two populations, the one with the higher HV value as the output of our proposed MOEA/AD.
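For intuition, HV can be computed exactly in two dimensions by summing rectangular slices; this is only an illustration, since the experiments involve up to 15 objectives, where specialized algorithms or Monte Carlo sampling [18] are required:

```python
def hypervolume_2d(front, ref):
    """Exact 2-D hypervolume: keep points that beat the reference point,
    sort them by the first objective, then accumulate the dominated
    rectangular slices between consecutive y-levels."""
    pts = sorted((p for p in front if p[0] < ref[0] and p[1] < ref[1]),
                 key=lambda p: p[0])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:               # otherwise the point is dominated
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Hypothetical normalized front with reference point (1.1, 1.1)
hv = hypervolume_2d([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]], [1.1, 1.1])
```

Dominated points are skipped automatically, so the value depends only on the nondominated subset of the input.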
4.3 Peer Algorithms
Here we choose nine state-of-the-art many-objective optimizers to validate the competitiveness of our proposed MOEA/AD. These peer algorithms belong to different types, including two decomposition-based algorithms (MOEA/D [48] and Global WASF-GA [60]), two Pareto-based algorithms (PICEA-g [28] and VaEA [61]), two indicator-based algorithms (HypE [18] and KnEA [62]), two algorithms that integrate the decomposition- and Pareto-based selection together (NSGA-III [16] and θ-DEA [46]), and an improved two-archive-based algorithm (TwoArch2 [63]). Some further comments on the peer algorithms are listed as follows.

Global WASF-GA is a decomposition-based algorithm that selects solutions to survive according to the rankings of solutions on each subproblem. Similar to MOEA/AD, it uses the ideal point and the nadir point simultaneously in its scalarizing function. However, instead of maintaining two coevolving populations, Global WASF-GA only has a single population, half of which is maintained by the scalarizing function using the ideal point while the other half is maintained by the one using the nadir point.

PICEA-g coevolves a set of target vectors sampled in the objective space, which can be regarded as a second population and is used to guide the environmental selection.

TwoArch2 maintains two archives via indicator-based selection and Pareto-based selection separately. In particular, an L_p-norm-based diversity maintenance scheme is designed to maintain the diversity archive.
4.4 Parameter Settings

Weight vectors: We employ the method developed in [44] to generate the weight vectors used in the MOEA/D variants. Note that we add an additional weight vector to remedy the missing centroid of the simplex for the 8-, 10- and 15-objective cases.

Population size: We set the population size the same as the number of weight vectors. In particular, it is set as 91, 210, 157, 276 and 136 for m = 3, 5, 8, 10 and 15, respectively.

Termination criteria: As suggested in [16], the termination criterion is set as a predefined number of generations, as shown in tab:evaluations.
Problem Instance      m=3    m=5    m=8    m=10   m=15
DTLZ1, DTLZ1^{-1}     400    600    750    1,000  1,500
DTLZ2, DTLZ2^{-1}     250    350    500    750    1,000
DTLZ3, DTLZ3^{-1}     1,000  1,000  1,000  1,500  2,000
DTLZ4, DTLZ4^{-1}     600    1,000  1,250  2,000  3,000
WFG, WFG^{-1}         400    750    1,500  2,000  3,000
Table 1: Settings of the Number of Generations.
Reproduction operators: For the SBX, we set the crossover probability to 1.0 and the distribution index to 30 [16]. As for the polynomial mutation, the mutation probability and the distribution index are set to 1/n and 20 [16], respectively.
Neighborhood size: set as recommended in [48].

Probability to select mating parents within the neighborhood: set as recommended in [48].
The intrinsic parameters of the other peer algorithms are set according to the recommendations in their original papers.
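The simplex-lattice construction behind the weight vectors of Section 4.4 can be sketched as follows. The cited method [44] layers such lattices for large m; this plain Das-Dennis sketch, with m = 3 and h = 12 divisions, yields exactly the 91 vectors quoted for the 3-objective case:

```python
from itertools import combinations

def simplex_lattice(m, h):
    """Das-Dennis simplex-lattice design: all m-dimensional weight vectors
    whose components are multiples of 1/h and sum to 1, enumerated via
    stars-and-bars (choose m-1 bar positions among h + m - 1 slots)."""
    weights = []
    for bars in combinations(range(h + m - 1), m - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)   # stars between consecutive bars
            prev = b
        parts.append(h + m - 2 - prev)   # stars after the last bar
        weights.append([p / h for p in parts])
    return weights

ws = simplex_lattice(3, 12)   # C(14, 2) = 91 weight vectors
```

For m = 3 and h = 12 the centroid (1/3, 1/3, 1/3) happens to be on the lattice; the text notes that for the 8-, 10- and 15-objective layered designs a centroid vector must be appended separately.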
5 Empirical Studies
In this section, we present and discuss the comparison results of our proposed MOEA/AD with the other stateoftheart peer algorithms. The mean HV values are given in tab:DTLZ_2 to tab:DTLZ_v, where the best results are highlighted in boldface with a gray background.
5.1 Comparisons on DTLZ and WFG Problem Instances
Problem  m  PBI  GWASF  PICEA-g  VaEA  HypE  KnEA  NSGA-III  θ-DEA  TwoArch2  MOEA/AD

3  7.785e+0  7.134e+0  7.562e+0  7.764e+0  7.774e+0  7.307e+0  7.786e+0  7.738e+0  7.785e+0  7.787e+0  
5  3.197e+1  1.965e+1  3.191e+1  3.194e+1  3.186e+1  2.939e+1  3.197e+1  3.197e+1  3.196e+1  3.197e+1  
DTLZ1  8  2.560e+2  7.026e+1  2.482e+2  2.559e+2  1.973e+2  1.739e+2  2.560e+2  2.560e+2  2.559e+2  2.560e+2 
10  1.024e+3  4.761e+2  1.010e+3  1.024e+3  9.573e+2  8.514e+2  1.024e+3  1.024e+3  1.024e+3  1.024e+3  
15  3.275e+4  1.092e+3  2.815e+4  3.275e+4  0.000e+0  2.621e+4  3.276e+4  3.264e+4  3.275e+4  3.270e+4  
3  7.413e+0  7.301e+0  7.376e+0  7.405e+0  7.371e+0  7.393e+0  7.412e+0  7.413e+0  7.407e+0  7.412e+0  
5  3.170e+1  3.037e+1  3.165e+1  3.166e+1  3.117e+1  3.166e+1  3.169e+1  3.170e+1  3.161e+1  3.170e+1  
DTLZ2  8  2.558e+2  2.423e+2  2.551e+2  2.558e+2  2.525e+2  2.558e+2  2.558e+2  2.558e+2  2.554e+2  2.558e+2 
10  1.024e+3  6.398e+2  1.023e+3  1.024e+3  1.017e+3  1.024e+3  1.024e+3  1.024e+3  1.022e+3  1.024e+3  
15  3.276e+4  1.638e+4  3.231e+4  3.276e+4  3.207e+4  3.277e+4  3.276e+4  3.276e+4  3.275e+4  3.276e+4  
3  7.406e+0  6.504e+0  6.849e+0  7.401e+0  7.285e+0  7.236e+0  7.406e+0  7.403e+0  7.413e+0  7.403e+0  
5  3.169e+1  1.600e+1  2.997e+1  3.166e+1  1.332e+1  2.862e+1  3.169e+1  3.169e+1  3.159e+1  3.169e+1  
DTLZ3  8  2.459e+2  1.278e+2  2.081e+2  2.237e+2  0.000e+0  1.542e+2  2.558e+2  2.558e+2  2.542e+2  2.558e+2 
10  1.024e+3  5.115e+2  9.394e+2  9.219e+2  0.000e+0  7.105e+2  1.024e+3  9.805e+2  1.020e+3  1.024e+3  
15  2.328e+4  1.631e+4  2.422e+4  2.674e+4  0.000e+0  0.000e+0  3.276e+4  2.380e+4  3.256e+4  3.276e+4  
3  6.398e+0  6.961e+0  7.053e+0  7.408e+0  7.414e+0  7.396e+0  7.059e+0  7.243e+0  7.069e+0  7.412e+0  
5  3.087e+1  2.710e+1  3.144e+1  3.167e+1  3.149e+1  3.167e+1  3.170e+1  3.170e+1  3.161e+1  3.169e+1  
DTLZ4  8  2.547e+2  1.575e+2  2.552e+2  2.558e+2  2.471e+2  2.558e+2  2.558e+2  2.558e+2  2.551e+2  2.558e+2 
10  1.023e+3  7.165e+2  1.023e+3  1.024e+3  1.019e+3  1.024e+3  1.024e+3  1.024e+3  1.022e+3  1.024e+3  
15  3.276e+4  1.636e+4  3.254e+4  3.276e+4  3.262e+4  3.277e+4  3.276e+4  3.276e+4  3.274e+4  3.276e+4 

According to the Wilcoxon rank-sum test, each result is marked to indicate whether the corresponding algorithm performs significantly worse or better than MOEA/AD.
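The significance testing used here can be sketched in a few lines. The following is a minimal pure-Python Wilcoxon rank-sum test using the normal approximation (the function name and the no-ties assumption are ours, not from the paper; in practice, library implementations with tie correction should be preferred):

```python
from math import sqrt
from statistics import NormalDist

def ranksum_test(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation).

    Compares two independent samples, e.g. the HV values obtained by
    two algorithms over repeated runs. Assumes no tied values.
    Returns (z, p): the z-statistic for sample `a` and the p-value.
    """
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    # Rank sum of sample `a` (ranks start at 1).
    w = sum(pooled.index(v) + 1 for v in a)
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (w - mean) / sqrt(var)
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p
```

An algorithm would then be marked as significantly worse (or better) than MOEA/AD when the p-value falls below the chosen significance level, e.g. 0.05.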
Generally speaking, MOEA/AD is the most competitive algorithm on the DTLZ problem instances: as shown in the table above, it wins in 161 out of 180 comparisons. More specifically, on DTLZ1 and DTLZ3, MOEA/AD obtains the largest HV values in all comparisons except the 15-objective DTLZ1 and the 3-objective DTLZ3 instances. On DTLZ2, MOEA/D-PBI obtains the best HV value in the 3-objective case, while MOEA/AD takes the lead as the number of objectives grows. On DTLZ4, the best algorithm varies with the number of objectives; even though MOEA/AD loses in 4 out of 45 comparisons, the differences are marginal. As for the two decomposition-based algorithms, MOEA/D-PBI obtains significantly worse HV values than MOEA/AD in 14 out of 20 comparisons, while Global WASF-GA is significantly outperformed in all 20 comparisons. In particular, Global WASF-GA fails to approximate the entire PF on all DTLZ instances due to its coarse diversity-maintenance scheme.

As for the two recently proposed Pareto-based many-objective optimizers, the HV values obtained by PICEA-g are significantly worse than those of MOEA/AD on all problem instances, which can be explained by its randomly sampled target vectors slowing down convergence. VaEA performs slightly better than PICEA-g but is still outperformed by MOEA/AD in 18 out of 20 comparisons. As expected, the performance of the two indicator-based algorithms is not satisfactory; in particular, KnEA only obtains the best HV values on the 15-objective DTLZ2 and DTLZ4 instances. θ-DEA and NSGA-III, which combine decomposition- and Pareto-based selection, achieve significantly better results than MOEA/AD in 2 and 3 comparisons respectively, whereas MOEA/AD beats them in 9 and 11 comparisons respectively. TwoArch2, which also maintains two co-evolving populations, is significantly outperformed by MOEA/AD on all DTLZ instances except the 15-objective DTLZ1 and the 3-objective DTLZ3.
Given these observations, we find that the promising performance of MOEA/AD does not come merely from maintaining two co-evolving populations; it is the adversarial search directions, and the collaboration between the two populations, that help strike the balance between convergence and diversity.
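The win counts quoted above (e.g., 161 out of 180 comparisons) can be reproduced from such tables with a small tally. A simplified sketch under our own naming (the paper counts wins via Wilcoxon significance, whereas this version compares mean HV values directly):

```python
def count_wins(rows, baseline="MOEA/AD"):
    """Count, for each peer algorithm, the number of problem instances
    on which `baseline` attains a strictly higher mean HV.
    Each row maps algorithm name -> mean HV for one instance."""
    wins = {}
    for row in rows:
        base_hv = row[baseline]
        for alg, hv in row.items():
            if alg == baseline:
                continue
            wins[alg] = wins.get(alg, 0) + (base_hv > hv)
    return wins
```

For instance, feeding in the rows of the DTLZ table yields, per peer algorithm, how many of the 20 instances MOEA/AD wins on raw mean HV.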
Mean HV values on the WFG problem instances:

| Problem | m | PBI | GWASF | PICEA-g | VaEA | HypE | KnEA | NSGA-III | θ-DEA | TwoArch2 | MOEA/AD |
|---------|---|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| WFG1 | 3 | 5.515e+0 | 6.031e+0 | 5.277e+0 | 5.317e+0 | 3.739e+0 | 5.380e+0 | 4.455e+0 | 5.317e+0 | 6.287e+0 | 6.355e+0 |
| WFG1 | 5 | 2.780e+1 | 2.756e+1 | 2.449e+1 | 2.005e+1 | 1.580e+1 | 2.050e+1 | 1.737e+1 | 2.513e+1 | 2.621e+1 | 2.988e+1 |
| WFG1 | 8 | 2.165e+2 | 1.547e+2 | 2.172e+2 | 2.172e+2 | 1.205e+2 | 1.722e+2 | 1.252e+2 | 2.314e+2 | 2.454e+2 | 2.330e+2 |
| WFG1 | 10 | 8.402e+2 | 7.059e+2 | 9.725e+2 | 9.434e+2 | 4.828e+2 | 7.520e+2 | 5.228e+2 | 9.562e+2 | 1.002e+3 | 9.673e+2 |
| WFG1 | 15 | 2.000e+4 | 1.563e+4 | 2.746e+4 | 3.040e+4 | 1.374e+4 | 2.423e+4 | 1.981e+4 | 2.263e+4 | 3.175e+4 | 2.645e+4 |
| WFG2 | 3 | 6.832e+0 | 7.381e+0 | 7.545e+0 | 7.445e+0 | 7.077e+0 | 7.519e+0 | 7.423e+0 | 7.557e+0 | 7.637e+0 | 7.186e+0 |
| WFG2 | 5 | 2.831e+1 | 3.083e+1 | 3.179e+1 | 3.135e+1 | 3.041e+1 | 3.166e+1 | 3.127e+1 | 3.093e+1 | 3.181e+1 | 2.954e+1 |
| WFG2 | 8 | 2.231e+2 | 1.302e+2 | 2.497e+2 | 2.503e+2 | 2.388e+2 | 2.543e+2 | 2.471e+2 | 2.359e+2 | 2.558e+2 | 2.435e+2 |
| WFG2 | 10 | 9.246e+2 | 5.203e+2 | 1.010e+3 | 1.019e+3 | 9.801e+2 | 1.019e+3 | 1.011e+3 | 9.597e+2 | 1.024e+3 | 9.853e+2 |
| WFG2 | 15 | 2.949e+4 | 1.634e+4 | 3.124e+4 | 3.195e+4 | 3.000e+4 | 3.152e+4 | 3.038e+4 | 2.592e+4 | 3.276e+4 | 3.133e+4 |
| WFG3 | 3 | 6.348e+0 | 7.010e+0 | 6.988e+0 | 6.818e+0 | 6.608e+0 | 6.870e+0 | 6.889e+0 | 6.931e+0 | 7.048e+0 | 6.906e+0 |
| WFG3 | 5 | 2.484e+1 | 2.763e+1 | 2.848e+1 | 2.666e+1 | 2.695e+1 | 2.615e+1 | 2.753e+1 | 2.810e+1 | 2.860e+1 | 2.835e+1 |
| WFG3 | 8 | 1.559e+2 | 1.278e+2 | 2.222e+2 | 2.165e+2 | 2.136e+2 | 2.065e+2 | 1.711e+2 | 1.542e+2 | 2.294e+2 | 2.219e+2 |
| WFG3 | 10 | 4.934e+2 | 5.114e+2 | 9.010e+2 | 8.670e+2 | 8.718e+2 | 8.409e+2 | 6.584e+2 | 6.066e+2 | 9.240e+2 | 8.883e+2 |
| WFG3 | 15 | 1.340e+4 | 1.634e+4 | 2.772e+4 | 2.784e+4 | 2.700e+4 | 2.141e+4 | 1.964e+4 | 1.835e+4 | 2.946e+4 | 2.774e+4 |
| WFG4 | 3 | 7.191e+0 | 7.246e+0 | 7.307e+0 | 7.282e+0 | 7.088e+0 | 7.324e+0 | 7.317e+0 | 7.319e+0 | 7.372e+0 | 7.369e+0 |
| WFG4 | 5 | 3.063e+1 | 3.009e+1 | 3.135e+1 | 3.076e+1 | 2.914e+1 | 3.125e+1 | 3.107e+1 | 3.110e+1 | 3.134e+1 | 3.150e+1 |
| WFG4 | 8 | 2.170e+2 | 1.284e+2 | 2.381e+2 | 2.513e+2 | 2.228e+2 | 2.553e+2 | 2.514e+2 | 2.520e+2 | 2.539e+2 | 2.557e+2 |
| WFG4 | 10 | 8.608e+2 | 5.129e+2 | 9.697e+2 | 1.006e+3 | 9.242e+2 | 1.022e+3 | 1.010e+3 | 1.013e+3 | 1.018e+3 | 1.024e+3 |
| WFG4 | 15 | 2.577e+4 | 1.638e+4 | 2.929e+4 | 3.249e+4 | 2.969e+4 | 3.273e+4 | 3.268e+4 | 3.270e+4 | 3.266e+4 | 3.274e+4 |
| WFG5 | 3 | 7.067e+0 | 7.070e+0 | 7.090e+0 | 7.127e+0 | 6.963e+0 | 7.168e+0 | 7.130e+0 | 7.132e+0 | 7.148e+0 | 7.161e+0 |
| WFG5 | 5 | 3.026e+1 | 2.940e+1 | 3.044e+1 | 3.024e+1 | 2.940e+1 | 3.064e+1 | 3.049e+1 | 3.053e+1 | 3.049e+1 | 3.073e+1 |
| WFG5 | 8 | 2.310e+2 | 1.816e+2 | 2.400e+2 | 2.459e+2 | 2.146e+2 | 2.470e+2 | 2.460e+2 | 2.459e+2 | | |