Lazy Greedy Hypervolume Subset Selection from Large Candidate Solution Sets

07/04/2020 · Weiyu Chen et al.

Subset selection has been a popular topic in recent years, and a number of subset selection methods have been proposed. Among these methods, hypervolume subset selection is widely used. Greedy hypervolume subset selection algorithms can achieve good approximations to the optimal subset. However, when the candidate set is large (e.g., an unbounded external archive with a large number of solutions), such algorithms are very time-consuming. In this paper, we propose a new lazy greedy algorithm exploiting the submodular property of the hypervolume indicator. The core idea is to avoid unnecessary hypervolume contribution calculations when searching for the solution with the largest contribution. Experimental results show that the proposed algorithm is hundreds of times faster than the original greedy inclusion algorithm and several times faster than the fastest known greedy inclusion algorithm on many test problems.


I Introduction

Multi-objective optimization aims to optimize several potentially conflicting objectives simultaneously. In the past few decades, evolutionary multi-objective optimization (EMO) algorithms have shown promising performance on this kind of problem. Subset selection is a hot topic in the EMO area and is involved in many phases of EMO algorithms. (i) In each generation, we need to select a pre-specified number of solutions from the current and offspring populations for the next generation. (ii) After the execution of an EMO algorithm, the final population is usually presented to the decision-maker. However, if the decision-maker does not want to examine all solutions in the final population, we need to choose only a small number of representative solutions. (iii) Since many good solutions are discarded during the execution of EMO algorithms [21], we can use an unbounded external archive (UEA) to store all non-dominated solutions examined during the search. In this case, we need to select a subset of the UEA as the final result after the termination of the algorithm [16, 28, 27].

Many subset selection methods have been proposed based on different selection criteria such as hypervolume-based subset selection [1, 5, 19, 15], ε-indicator-based subset selection [6] and distance-based subset selection [28]. Among these criteria, the hypervolume indicator has been widely used for subset selection [1, 5, 19, 15]. The hypervolume subset selection problem (HSSP) [1] is to select a pre-specified number of solutions from a given candidate solution set to maximize the hypervolume of the selected solutions.

At present, the HSSP can only be efficiently solved in two dimensions. When the dimension is higher than two, the search for the exact optimal subset of the HSSP is NP-hard [26]. Some algorithms have been proposed to approximately solve the HSSP. They can be categorized into the following three classes: (i) hypervolume-based greedy inclusion, (ii) hypervolume-based greedy removal, and (iii) hypervolume-based genetic selection. These algorithms can achieve good approximations to the optimal subset.

However, when the candidate solution set is huge (e.g., tens of thousands of non-dominated solutions in a UEA) and/or the dimensionality is high (e.g., a 10-objective problem), even greedy algorithms require a long computation time. Some efficient algorithms (e.g., IHSO* [3] and IWFG [10]) were proposed to quickly determine the solution with the least hypervolume contribution in each iteration of greedy removal algorithms. Guerreiro and Fonseca [15] proposed an algorithm for efficiently updating the hypervolume contribution of each solution, which can reduce the runtime of greedy algorithms for the HSSP in up to four dimensions to polynomial time. Jiang et al. [17] also proposed an efficient mechanism for hypervolume contribution updating in any dimension to decrease the total runtime of a hypervolume-based EMO algorithm.

In this paper, we propose a new greedy inclusion algorithm, which is applicable to large candidate solution sets with many objectives. This algorithm exploits the submodularity [23] of the hypervolume indicator to reduce the unnecessary calculation of hypervolume contributions. Experimental results show that the proposed idea greatly improves the efficiency of greedy subset selection from large candidate solution sets of many-objective problems.

The rest of the paper is organized as follows. Section II describes the hypervolume indicator, the hypervolume contribution, and some related state-of-the-art algorithms. In Section III, we describe our proposed algorithm in detail. In Section IV, we present experimental results where the proposed algorithm is compared with some state-of-the-art algorithms. Finally, we draw some conclusions in Section V.

II Background

II-A Hypervolume indicator and hypervolume contribution

The hypervolume indicator [18, 31] is a widely used metric to evaluate the diversity and convergence of a solution set. It is defined as the size of the region of the objective space that is covered by a set of non-dominated solutions and bounded by a reference set R. Formally, the hypervolume of a solution set S is defined as

$HV(S) = \int_{\mathbb{R}^m} \alpha_S(z)\, dz$, (1)

where m is the number of objectives (i.e., the dimensionality of the objective space) and $\alpha_S$ is the attainment function of S with respect to the reference set R, which can be written as

$\alpha_S(z) = \begin{cases} 1 & \text{if } \exists\, s \in S,\ \exists\, r \in R:\ s \preceq z \preceq r, \\ 0 & \text{otherwise.} \end{cases}$ (2)

Calculating the hypervolume of a solution set is a #P-hard problem [9]. A number of algorithms have been proposed to quickly calculate the exact hypervolume, such as Hypervolume by Slicing Objectives (HSO) [13, 14], Hypervolume by Overmars and Yap (HOY) [24, 25, 2], and Walking Fish Group (WFG) [30]. Among these algorithms, WFG is generally accepted as the fastest one. The hypervolume contribution is defined based on the hypervolume indicator. The hypervolume contribution of a point p to a set S is

$HVC(p, S) = HV(S \cup \{p\}) - HV(S).$ (3)

Fig. 1 illustrates the hypervolume of a solution set and the hypervolume contribution of a single solution to that set in two dimensions: the grey region is the hypervolume of the solution set, and the yellow region is the hypervolume contribution of the additional solution.

Fig. 1: The hypervolume of a solution set and the hypervolume contribution of one solution to that set for a two-objective minimization problem.
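
To make the two-objective case concrete, the following minimal Python sketch (not the WFG algorithm) computes the exact hypervolume of a non-dominated two-objective solution set for a minimization problem, and the contribution of a point directly from (3). A single reference point is assumed in place of a reference set, and the helper names hv_2d and hvc are illustrative.

def hv_2d(points, ref):
    # Sweep from left to right over the non-dominated set, accumulating
    # the area of the horizontal strip that each point adds.
    pts = sorted(points)               # ascending f1, hence descending f2
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

def hvc(p, s, ref):
    # Hypervolume contribution of p to s, computed directly from Eq. (3).
    # Assumes s + [p] is still a non-dominated set.
    return hv_2d(s + [p], ref) - hv_2d(s, ref)

s = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
print(hv_2d(s, (5.0, 5.0)))             # 11.0
print(hvc((3.0, 1.5), s, (5.0, 5.0)))   # 0.5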

Note that calculating the hypervolume contribution based on its definition in (3) requires two hypervolume calculations, which is not very efficient. Bringmann and Friedrich [7] and Bradstreet et al. [4] proposed a new calculation method to reduce the amount of calculation. The hypervolume contribution is calculated as

$HVC(p, S) = HV(\{p\}) - HV(\mathrm{limit}(S, p)),$ (4)

where

$\mathrm{limit}(S, p) = \{\mathrm{worse}(q, p) \mid q \in S\},$ (5)
$\mathrm{worse}(q, p) = (\max\{q_1, p_1\}, \ldots, \max\{q_m, p_m\}).$ (6)

In this formulation, worse(q, p) takes the larger (i.e., worse, for minimization) value in each dimension. Compared to the straightforward calculation method in (3), this method is much more efficient. The hypervolume of one solution (i.e., HV({p})) can be easily calculated. We can also apply the previously mentioned HSO [13, 14], HOY [24, 25, 2] and WFG [30] algorithms to calculate the hypervolume of the reduced solution set (i.e., HV(limit(S, p))).

Let us take Fig. 2 as an example. Suppose we want to calculate the hypervolume contribution of a solution p to a solution set S. First, for each solution q in S, we replace each of its objective values with the corresponding value of p if the value of p is larger (i.e., we calculate worse(q, p) for every q in S, which gives limit(S, p)). After this replacement, some solutions in limit(S, p) may be dominated by others; such solutions can be removed since they have no influence on the hypervolume of limit(S, p). Then, we calculate the hypervolume of the remaining set (i.e., the area of the grey region in Fig. 2) and subtract it from the hypervolume of solution p. The remaining yellow part is the hypervolume contribution of solution p.

Fig. 2: Illustration of the efficient hypervolume contribution computation method.
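
The following Python sketch expresses (4)-(6) in code, assuming minimization, a single reference point, and a generic hypervolume routine hv(points, ref) (e.g., an implementation of WFG). The names worse, limit and hvc_fast are illustrative.

import numpy as np

def worse(q, p):
    # Component-wise worse (larger, for minimization) value, Eq. (6).
    return np.maximum(np.asarray(q), np.asarray(p))

def limit(s, p):
    # Replace every q in S by worse(q, p), Eq. (5); dominated points may
    # additionally be filtered out before the hypervolume call.
    return [worse(q, p) for q in s]

def hvc_fast(p, s, ref, hv):
    # Eq. (4): HVC(p, S) = HV({p}) - HV(limit(S, p)).
    own = float(np.prod(np.asarray(ref) - np.asarray(p)))  # HV of p alone
    return own - hv(limit(s, p), ref)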

II-B Hypervolume subset selection problem

The hypervolume subset selection problem (HSSP) [1] is to select a pre-specified number (say k) of solutions from a given candidate solution set to maximize the hypervolume of the selected solutions (i.e., to select a subset S of size k from a candidate set V to maximize HV(S)). Its formal definition is as follows.

Given an n-point set $V = \{v_1, \ldots, v_n\}$ and an integer k (k < n), maximize $HV(S)$ subject to $S \subseteq V$ and $|S| = k$.

For two-objective problems, the HSSP can be solved exactly in polynomial time [15]. For multi-objective problems with three or more objectives, the HSSP is an NP-hard problem [26]. Hence, it is impractical to try to find the exact optimal subset when the size of the candidate set is large and/or the dimensionality of the objective space is high. In practice, greedy heuristic algorithms and genetic algorithms are employed to obtain an approximation of the optimal subset.

II-C Hypervolume-based greedy inclusion

Hypervolume-based greedy inclusion selects solutions from the candidate set V one by one. In each iteration, the solution with the largest hypervolume contribution to the already selected solution set is selected, until the required number of solutions have been selected. The pseudocode of greedy inclusion is shown in Algorithm 1. The hypervolume-based greedy inclusion algorithm provides a (1 − 1/e)-approximation (where e is the base of the natural logarithm) to the HSSP, which means that the ratio of the hypervolume of the obtained subset to the hypervolume of the optimal subset is not less than 1 − 1/e ≈ 0.632 [23].

Input: V (a set of non-dominated solutions), k (solution subset size)
Output: S (the selected subset of V)
1:  if |V| ≤ k then
2:     S = V
3:  else
4:     S = ∅
5:     while |S| < k do
6:        for each p in V \ S do
7:           calculate the hypervolume contribution HVC(p, S) of p to S
8:        end for
9:        p* = the solution in V \ S with the largest hypervolume contribution
10:       S = S ∪ {p*}
11:    end while
12: end if
Algorithm 1 Greedy Inclusion Hypervolume Subset Selection
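
The (1 − 1/e) bound follows the classical argument of Nemhauser et al. [23] for non-decreasing submodular functions. A sketch, with S_i denoting the greedy subset after i iterations and S* an optimal subset of size k, is

$HV(S^*) \le HV(S^* \cup S_i) \le HV(S_i) + \sum_{p \in S^* \setminus S_i} HVC(p, S_i) \le HV(S_i) + k \big( HV(S_{i+1}) - HV(S_i) \big),$

which rearranges to $HV(S^*) - HV(S_{i+1}) \le (1 - 1/k)\big(HV(S^*) - HV(S_i)\big)$ and hence, after k iterations,

$HV(S_k) \ge \Big(1 - \big(1 - \tfrac{1}{k}\big)^k\Big)\, HV(S^*) \ge \big(1 - \tfrac{1}{e}\big)\, HV(S^*).$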

II-D Hypervolume-based greedy removal

In contrast to greedy inclusion algorithms, hypervolume-based greedy removal algorithms discard one solution with the least hypervolume contribution to the current solution set in each iteration. To quickly identify the solution with the least hypervolume contribution, Incremental Hypervolume by Slicing Objectives (IHSO*) [3] and Incremental WFG (IWFG) [10] were proposed. These methods can be used in the greedy removal algorithm. Some experimental results show that these methods can greatly accelerate greedy removal algorithms.

Unlike greedy inclusion, greedy removal has no approximation guarantee: in the worst case, it can obtain an arbitrarily bad solution subset [8]. In practice, however, it usually leads to good approximations.

When the required subset size k is close to the size of V (i.e., when the number of solutions to be removed is small), greedy removal algorithms are faster than greedy inclusion algorithms. However, when k is relatively small in comparison with the size of V, greedy removal algorithms are not efficient since they need to remove a large number of solutions (e.g., selecting k = 100 solutions from n = 20000 candidates requires 19900 removal steps but only 100 inclusion steps).

II-E Hypervolume contribution update

Hypervolume-based greedy inclusion/removal algorithms can be accelerated by updating hypervolume contributions instead of recalculating them in each iteration (i.e., by reusing the calculation results of the previous iteration instead of calculating all hypervolume contributions from scratch in each iteration). Guerreiro and Fonseca [15] proposed an algorithm to update the hypervolume contributions efficiently in three and four dimensions. Using their algorithm, the time complexity of hypervolume-based greedy removal in three and four dimensions can be substantially reduced (to polynomial time in the candidate set size).

In a hypervolume-based EMO algorithm called FV-MOEA proposed by Jiang et al. [17], an efficient hypervolume contribution update method applicable to any dimension was proposed. The main idea of their method is that the hypervolume contribution of a solution is only associated with a small number of its neighboring solutions rather than all solutions in the solution set. Suppose that a solution p has just been removed from the solution set S. The main process of the hypervolume contribution update method in [17] is shown in Algorithm 2.

Input: HVC (the hypervolume contribution of each solution in S), p (the newly removed solution)
Output: HVC (the updated hypervolume contribution of each solution in S)
1:  for each q ∈ S do
2:     w = worse(q, p)
3:     S' = limit(S \ {q}, w)
4:     HVC(q) = HVC(q) + HV({w}) − HV(S')
5:  end for
Algorithm 2 Hypervolume Contribution Update

The worse and limit operations in Algorithm 2 are the same as those in Section II-A. Let us explain the basic idea of Algorithm 2 using Fig. 3, where a solution p has been removed from the solution set and the hypervolume contribution of a remaining solution q (the blue region in Fig. 3) is to be updated. The worse solution w in line 2 of Algorithm 2 has the maximum objective value of p and q in each dimension. In line 3, the limit operator first replaces each solution in S \ {q} with its worse counterpart with respect to w; the dominated solutions are then removed, which yields the reduced set S'. In line 4, the hypervolume contribution of q is updated by adding the term HV({w}) − HV(S') to its original value (i.e., the blue region in Fig. 3). The added term is the joint hypervolume contribution of solutions p and q (i.e., the yellow region in Fig. 3). In this way, the hypervolume contribution of each solution is updated.

Since the limit process reduces the number of non-dominated solutions, this update method greatly improves the speed of hypervolume-based greedy removal algorithms. Algorithm 2 from [17] is the fastest known method for updating hypervolume contributions in any dimension.

Fig. 3: Illustration of the hypervolume contribution update method in FV-MOEA. In this figure, it is assumed that a point p has just been removed and the hypervolume contribution of a point q is to be updated.
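
A compact Python sketch of Algorithm 2 is given below, inlining the worse and limit operations of Section II-A and again assuming a generic hypervolume routine hv(points, ref). Storing the contributions in a list indexed like the solution set is an implementation choice, not part of the original algorithm.

import numpy as np

def update_contributions(hvc, s, p, ref, hv):
    # hvc[i] stores the hypervolume contribution of s[i]; the solution p
    # has just been removed, so s no longer contains it.
    p, ref = np.asarray(p), np.asarray(ref)
    for i, q in enumerate(s):
        w = np.maximum(np.asarray(q), p)            # line 2: w = worse(q, p)
        reduced = [np.maximum(np.asarray(x), w)     # line 3: limit(S \ {q}, w)
                   for j, x in enumerate(s) if j != i]
        hv_w = float(np.prod(ref - w))              # HV({w})
        hvc[i] += hv_w - hv(reduced, ref)           # line 4: add joint HVC of p and q
    return hvc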

III Lazy Greedy Subset Selection Algorithm

III-A Algorithm proposal

In each iteration of hypervolume-based greedy inclusion algorithms, we only need to identify the solution with the largest hypervolume contribution. However, the hypervolume contributions of all candidate solutions are usually calculated. Since calculating the hypervolume contribution of each solution is time-consuming, such an algorithm is not efficient. The main idea of the proposed algorithm is to exploit the submodular property of the hypervolume indicator [29]. The definition of a submodular function [23] is as follows.

Given a finite nonempty set Z, a real-valued function f defined on the set of all subsets of Z that satisfies

$f(A \cup \{z\}) - f(A) \ge f(B \cup \{z\}) - f(B)$ for all $A \subseteq B \subseteq Z$ and all $z \in Z \setminus B$

is called a submodular function.

The hypervolume indicator is a submodular function [29]. This means that the hypervolume contribution of a solution to the selected solution subset S never increases as solutions are added to S in a greedy inclusion manner. Hence, instead of recomputing the hypervolume contribution of every candidate solution in each iteration, we can utilize the following lazy evaluation mechanism. We use a list to store the candidate (i.e., unselected) solutions and their tentative HVC (hypervolume contribution) values. The tentative HVC value of each solution is initialized with its hypervolume (i.e., its hypervolume contribution when no solution is selected). Since contributions never increase, the tentative HVC value of each solution is an upper bound on its true hypervolume contribution.

To find the solution with the largest hypervolume contribution, we pick the most promising solution (the one with the largest tentative HVC value) and recalculate its hypervolume contribution to the current solution subset S. If the recalculated hypervolume contribution of this solution is still the largest in the list, we do not have to calculate the hypervolume contributions of the other solutions, because their tentative values are upper bounds and are already smaller. In this case, we move this solution from the list to the selected solution subset S. Otherwise, its tentative HVC value is updated with the recalculated value, and the solution with the now-largest tentative HVC value in the list is examined next (i.e., its hypervolume contribution is recalculated). This procedure is iterated until the recalculated hypervolume contribution of the examined solution is the largest in the list.

In many cases, the recalculation of the hypervolume contribution of a solution results in the same value as, or a slightly smaller value than, its tentative HVC value in the list, since the inclusion of a single solution into the subset changes the hypervolume contributions of only its neighbors in the objective space. Thus, the solution with the largest hypervolume contribution is often found without examining all solutions in the list. By applying this lazy evaluation mechanism, we can avoid a lot of unnecessary calculations in hypervolume-based greedy inclusion algorithms.

Since we always need to find the largest tentative HVC value in the list, a priority queue implemented as a max-heap is used to accelerate this procedure. The details of the proposed lazy greedy inclusion hypervolume subset selection (LGI-HSS) algorithm are shown in Algorithm 3.

The idea of lazy evaluation was proposed by Minoux [22] to accelerate the greedy algorithm for maximizing submodular functions. It was later applied in specific areas such as influence maximization [20]. Minoux [22] proved that if the function is non-decreasing and submodular and the greedy solution is unique, the lazy greedy algorithm and the original greedy algorithm produce identical solutions. Since the hypervolume indicator is proved to be non-decreasing and submodular [29], the LGI-HSS algorithm obtains the same subset as the original greedy inclusion algorithm when they use the same tie-breaking mechanism.

Input: V (a set of non-dominated solutions), k (solution subset size)
Output: S (the selected subset of V)
1:  if |V| ≤ k then
2:     S = V
3:  else
4:     S = ∅, Q = empty max-heap of (solution, tentative HVC) pairs
5:     for each p in V do
6:        insert (p, HV({p})) into Q
7:     end for
8:     while |S| < k do
9:        while true do
10:          p = the solution with the largest tentative HVC in Q
11:          update the tentative HVC of p to HVC(p, S)
12:          if p still has the largest tentative HVC in Q then
13:             S = S ∪ {p}
14:             remove p from Q
15:             break
16:          end if
17:       end while
18:    end while
19: end if
Algorithm 3 Lazy Greedy Inclusion Hypervolume Subset Selection (LGI-HSS)
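
A runnable Python sketch of Algorithm 3 is shown below, assuming a user-supplied contribution routine hvc(p, s) (e.g., built on (4)-(6)). Python's heapq is a min-heap, so keys are negated to emulate the max-heap; the computed_at counter is an implementation device for the "still the largest" test in line 12, not part of the pseudocode.

import heapq

def lazy_greedy_hss(v, k, hvc):
    """Greedily select k of the solutions in v for hypervolume maximization."""
    if len(v) <= k:
        return list(v)
    s = []
    # Tentative HVC = contribution to the empty subset, i.e., HV({p}).
    # The index i breaks ties and avoids comparing solution objects.
    heap = [(-hvc(p, []), i, p) for i, p in enumerate(v)]
    heapq.heapify(heap)
    computed_at = [0] * len(v)   # |S| when each tentative value was computed
    while len(s) < k:
        neg_val, i, p = heapq.heappop(heap)
        if computed_at[i] == len(s):
            s.append(p)          # tentative value is up to date: p is the winner
        else:
            computed_at[i] = len(s)                  # recalculate against current S
            heapq.heappush(heap, (-hvc(p, s), i, p))
    return s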

III-B An illustrative example

Let us explain the proposed algorithm using a simple example. Fig. 4 shows the changes of the tentative hypervolume contributions in the list. The values in the parentheses are the stored tentative HVC values of the candidate solutions with respect to the selected subset. For illustration purposes, the solutions in the list are sorted by their stored HVC values. In the actual implementation, however, full sorting is not needed (especially when the number of candidate solutions is very large), since the algorithm only needs to find the most promising candidate solution with the largest tentative HVC value.

Fig. 4 (i) shows the initial list including five candidate solutions. The current solution subset is empty. In Fig. 4 (i), the first solution in the list has the largest HVC value. Since the initial HVC value of each solution is its true hypervolume contribution to the current empty solution subset, no recalculation is needed, and this solution is moved from the list to the solution subset.

In Fig. 4 (ii), after the first solution has been moved, the solution now at the top of the list has the largest stored HVC value. Thus, its hypervolume contribution is recalculated. We assume that the recalculated HVC value is 4, as shown in Fig. 4 (iii).

Fig. 4 (iii) shows the list after this recalculation. Since the updated HVC value of the examined solution is no longer the largest, we move on to the solution that now has the largest stored HVC value in the list and recalculate its hypervolume contribution. We assume that the recalculated HVC value is 6, as shown in Fig. 4 (iv).

Fig. 4 (iv) shows the list after the second recalculation. Since the recalculated HVC value of this solution is still the largest in the list, the solution is moved from the list to the solution subset. Fig. 4 (v) shows the list after this move, and the solution with the largest stored HVC value is examined next.

In this example, when we select the second solution from the remaining four candidates, we evaluate the hypervolume contributions of only two solutions. In the standard greedy inclusion algorithm, all four candidates would be examined. In this manner, the proposed algorithm decreases the computation time of the standard greedy inclusion algorithm.

Fig. 4: Illustration of the proposed algorithm. The values in the parentheses are the stored tentative HVC values.

IV Experiments

IV-A Algorithms for comparison

The proposed LGI-HSS algorithm is compared with the following two algorithms:

  1. Standard greedy inclusion hypervolume subset selection (GI-HSS): This is the greedy inclusion algorithm described in Section II-C. When calculating the hypervolume contribution, the efficient method described in Section II-A (i.e., formulas (4)-(6)) is employed.

  2. Greedy inclusion hypervolume subset selection with hypervolume contribution updating (UGI-HSS): The hypervolume contribution updating method proposed in FV-MOEA [17] (Algorithm 2) is used. Since Algorithm 2 is designed for greedy removal, it is adapted for greedy inclusion here. UGI-HSS is the fastest known greedy inclusion algorithm applicable to any dimension.

Since our main focus is the selection of a solution subset from an unbounded external archive (i.e., the number of solutions to be selected is much smaller than the number of candidate solutions: k ≪ n in the HSSP), greedy removal is not efficient. Hence, algorithms only suitable for greedy removal (e.g., greedy removal using IHSO* [3] or IWFG [10] to identify the solution with the least contribution) are not compared in this paper.

IV-B Test Problems and Candidate Solutions

To examine the performance of the three subset selection algorithms, we choose three representative test problems with different Pareto front (PF) shapes:

  1. Spherical front: Solutions on the true PF of the DTLZ2 test problem [12].

  2. Discontinuous front: Solutions on the true PF of the DTLZ7 test problem [12].

  3. Inverted spherical front: Solutions on the true PF of the Inverted DTLZ2 (I-DTLZ2) problem [11].

For each test problem, we use three problem instances with 5, 8 and 10 objectives (i.e., solution subset selection is performed in five-, eight- and ten-dimensional objective spaces). Four different settings of the candidate solution set size are examined: 5000, 10000, 15000 and 20000. We first uniformly generate 100,000 solutions on the PF. In each run of a solution subset selection algorithm, the required number of candidate solutions (i.e., 5000, 10000, 15000 or 20000 solutions) is randomly selected from the generated 100,000 solutions for each problem instance. Computational experiments are performed five times for each setting of the candidate solution set size for each problem instance. The number of solutions to be selected is specified as 100. Thus, our task is to select 100 solutions from 5000, 10000, 15000 or 20000 candidate solutions to maximize the hypervolume of the selected solutions.
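
For reference, a common way (not necessarily the exact procedure used here) to generate uniformly distributed candidate solutions on the spherical DTLZ2 front and to draw one candidate set from them is sketched below.

import numpy as np

rng = np.random.default_rng(0)

def spherical_front(n, m):
    # The absolute value of a standard normal vector, normalized to unit
    # length, is uniformly distributed on the positive orthant of the
    # unit sphere, which is exactly the m-objective DTLZ2 Pareto front.
    x = np.abs(rng.standard_normal((n, m)))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

pool = spherical_front(100_000, 5)                     # points on the 5-D front
idx = rng.choice(len(pool), size=5000, replace=False)
candidates = pool[idx]                                 # one candidate set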

IV-C Experimental settings

In each subset selection algorithm, the same reference point is used for hypervolume (contribution) calculation for all test problems, independent of the number of objectives. We use the WFG algorithm [30] for hypervolume calculation in each solution subset selection algorithm. The code of the WFG algorithm is available from http://www.wfg.csse.uwa.edu.au/hypervolume/#code.

All subset selection algorithms are coded in MATLAB R2018a. The computation time of each run is measured on an Intel Core i5-7200U CPU with 4GB of RAM, running Windows 10.

IV-D Experimental results

The average computation times of the three algorithms on the DTLZ2, DTLZ7 and I-DTLZ2 test problems are summarized in Figs. 5-7, respectively. Compared with the standard GI-HSS algorithm, our LGI-HSS algorithm reduces the computation time by 91% to 99%. As the number of objectives (i.e., the dimensionality of the objective space) increases, the advantage of LGI-HSS over the other algorithms becomes larger. Among the three test problems in Figs. 5-7, all three algorithms are fast on the I-DTLZ2 problem and slow on the DTLZ2 problem.

Even when we compare our LGI-HSS algorithm with UGI-HSS, the fastest known greedy inclusion algorithm, LGI-HSS is much faster. On DTLZ2 in Fig. 5, LGI-HSS spent 74% to 96% less computation time than UGI-HSS. On DTLZ7 in Fig. 6, LGI-HSS spent 47% to 76% less computation time than UGI-HSS. On the five-objective I-DTLZ2 problem instance in Fig. 7 (a), the difference in the average computation time between the two algorithms is relatively small (the average computation time of LGI-HSS is less than that of UGI-HSS by 34% to 58%). However, as the number of objectives increases in Fig. 7, the difference in the average computation time between the two algorithms becomes larger for I-DTLZ2.

From Figs. 5-7, we can also observe that the average computation time of each algorithm did not severely increase as the number of objectives increased (i.e., as the dimensionality of the objective space increased) for DTLZ7 in Fig. 6 and I-DTLZ2 in Fig. 7. In some cases, the average computation time of LGI-HSS even decreased as the number of objectives increased (e.g., for LGI-HSS on I-DTLZ2 in Fig. 7). This issue needs to be further addressed in our future study.

(a) Five-objective DTLZ2 (5D spherical front).
(b) Eight-objective DTLZ2 (8D spherical front).
(c) Ten-objective DTLZ2 (10D spherical front).
Fig. 5: Average computation time on DTLZ2 with the spherical PF. The time axis is log scaled.
(a) Five-objective DTLZ7 (5D discontinuous front).
(b) Eight-objective DTLZ7 (8D discontinuous front).
(c) Ten-objective DTLZ7 (10D discontinuous front).
Fig. 6: Average computation time on DTLZ7 with the discontinuous PF. The time axis is log scaled.
(a) Five-objective I-DTLZ2 (5D inverted spherical front).
(b) Eight-objective I-DTLZ2 (8D inverted spherical front).
(c) Ten-objective I-DTLZ2 (10D inverted spherical front).
Fig. 7: Average computation time on I-DTLZ2 with the inverted spherical PF. The time axis is log scaled.

V Concluding Remarks

In this paper, we proposed an efficient greedy inclusion algorithm (LGI-HSS) to select a small number of solutions from a large candidate solution set for hypervolume maximization. The proposed LGI-HSS algorithm is based on the submodular property of the hypervolume indicator. Its core idea is to use this property to avoid unnecessary hypervolume contribution calculations. LGI-HSS obtains the same solution subset as the standard greedy inclusion algorithm since it does not change the basic framework of greedy inclusion. Our experimental results on three test problems (DTLZ2, DTLZ7 and Inverted DTLZ2) with 5, 8 and 10 objectives showed that the proposed LGI-HSS algorithm is much more efficient than both the standard greedy inclusion algorithm and the state-of-the-art fast greedy inclusion algorithm.

Our experimental results clearly showed that the idea of lazy evaluation based on the submodular property drastically decreases the computation time of hypervolume-based greedy subset selection. One interesting future research topic is to examine the applicability of this idea to other performance indicators. In this research direction, the relation between submodularity and Pareto compliance may need to be clearly explained. Another interesting research direction is to examine the relation between the efficiency of hypervolume-based subset selection algorithms and the properties of multi-objective optimization problems. In particular, it needs to be explained why the increase in the number of objectives did not increase the computation time of some subset selection algorithms on some problems whereas it severely increased the computation time on others.

References

  • [1] J. Bader and E. Zitzler (2011) HypE: An algorithm for fast hypervolume-based many-objective optimization. Evolutionary Computation 19 (1), pp. 45–76. Cited by: §I, §II-B.
  • [2] N. Beume (2009) S-metric calculation by considering dominated hypervolume as Klee's measure problem. Evolutionary Computation 17 (4), pp. 477–492. Cited by: §II-A, §II-A.
  • [3] L. Bradstreet, L. While, and L. Barone (2008) A fast incremental hypervolume algorithm. IEEE Transactions on Evolutionary Computation 12 (6), pp. 714–723. Cited by: §I, §II-D, §IV-A.
  • [4] L. Bradstreet, L. While, and L. Barone (2009) A new way of calculating exact exclusive hypervolumes. The University of Western Australia, School of Computer Science & Software Engineering, Technical Report UWA-CSSE-09–002. Cited by: §II-A.
  • [5] K. Bringmann, T. Friedrich, and P. Klitzke (2014) Generic postprocessing via subset selection for hypervolume and epsilon-indicator. In International Conference on Parallel Problem Solving from Nature, pp. 518–527. Cited by: §I.
  • [6] K. Bringmann, T. Friedrich, and P. Klitzke (2014) Two-dimensional subset selection for hypervolume and epsilon-indicator. In Proceedings of the 2014 Annual Conference on Genetic and Evolutionary Computation, pp. 589–596. Cited by: §I, §II-E.
  • [7] K. Bringmann and T. Friedrich (2009) Approximating the least hypervolume contributor: NP-hard in general, but fast in practice. In International Conference on Evolutionary Multi-Criterion Optimization, pp. 6–20. Cited by: §II-A.
  • [8] K. Bringmann and T. Friedrich (2010) An efficient algorithm for computing hypervolume contributions. Evolutionary Computation 18 (3), pp. 383–402. Cited by: §II-D.
  • [9] K. Bringmann and T. Friedrich (2010) Approximating the volume of unions and intersections of high-dimensional geometric objects. Computational Geometry 43 (6-7), pp. 601–610. Cited by: §II-A.
  • [10] W. Cox and L. While (2016) Improving the IWFG algorithm for calculating incremental hypervolume. In 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 3969–3976. Cited by: §I, §II-D, §IV-A.
  • [11] K. Deb and H. Jain (2013) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Transactions on Evolutionary Computation 18 (4), pp. 577–601. Cited by: item 3.
  • [12] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler (2002) Scalable multi-objective optimization test problems. In Proceedings of the 2002 Congress on Evolutionary Computation. CEC’02 (Cat. No. 02TH8600), Vol. 1, pp. 825–830. Cited by: item 1, item 2.
  • [13] J. J. Durillo, A. J. Nebro, and E. Alba (2010) The jMetal framework for multi-objective optimization: design and architecture. In IEEE Congress on Evolutionary Computation, pp. 1–8. Cited by: §II-A, §II-A.
  • [14] J. J. Durillo and A. J. Nebro (2011) jMetal: a Java framework for multi-objective optimization. Advances in Engineering Software 42 (10), pp. 760–771. Cited by: §II-A, §II-A.
  • [15] A. P. Guerreiro and C. M. Fonseca (2017) Computing and updating hypervolume contributions in up to four dimensions. IEEE Transactions on Evolutionary Computation 22 (3), pp. 449–463. Cited by: §I, §I, §II-B.
  • [16] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima (2016) How to compare many-objective algorithms under different settings of population and archive sizes. In 2016 IEEE Congress on Evolutionary Computation (CEC), pp. 1149–1156. Cited by: §I.
  • [17] S. Jiang, J. Zhang, Y. Ong, A. N. Zhang, and P. S. Tan (2014) A simple and fast hypervolume indicator-based multiobjective evolutionary algorithm. IEEE Transactions on Cybernetics 45 (10), pp. 2202–2213. Cited by: §I, §II-E, §II-E, item 2.
  • [18] J. D. Knowles, D. W. Corne, and M. Fleischer (2003) Bounded archiving using the Lebesgue measure. In The 2003 Congress on Evolutionary Computation, CEC'03, Vol. 4, pp. 2490–2497. Cited by: §II-A.
  • [19] T. Kuhn, C. M. Fonseca, L. Paquete, S. Ruzika, M. M. Duarte, and J. R. Figueira (2016) Hypervolume subset selection in two dimensions: formulations and algorithms. Evolutionary Computation 24 (3), pp. 411–425. Cited by: §I.
  • [20] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance (2007) Cost-effective outbreak detection in networks. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 420–429. Cited by: §III-A.
  • [21] M. Li and X. Yao (2019) An empirical investigation of the optimality and monotonicity properties of multiobjective archiving methods. In International Conference on Evolutionary Multi-Criterion Optimization, pp. 15–26. Cited by: §I.
  • [22] M. Minoux (1978) Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, pp. 234–243. Cited by: §III-A.
  • [23] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher (1978) An analysis of approximations for maximizing submodular set functions-I. Mathematical Programming 14 (1), pp. 265–294. Cited by: §I, §II-C, §III-A.
  • [24] M. H. Overmars and C. Yap (1988) New upper bounds in Klee's measure problem. In Proceedings of the 29th Annual Symposium on Foundations of Computer Science, pp. 550–556. Cited by: §II-A, §II-A.
  • [25] M. H. Overmars and C. Yap (1991) New upper bounds in Klee's measure problem. SIAM Journal on Computing 20 (6), pp. 1034–1045. Cited by: §II-A, §II-A.
  • [26] G. Rote, K. Buchin, K. Bringmann, S. Cabello, and M. Emmerich (2016) Selecting k points that maximize the convex hull volume. In Proceedings of the 19th Japan Conference on Discrete and Computational Geometry, Graphs, and Games, pp. 58–60. Cited by: §I, §II-B.
  • [27] H. K. Singh, K. S. Bhattacharjee, and T. Ray (2018) Distance-based subset selection for benchmarking in evolutionary multi/many-objective optimization. IEEE Transactions on Evolutionary Computation 23 (5), pp. 904–912. Cited by: §I.
  • [28] R. Tanabe, H. Ishibuchi, and A. Oyama (2017) Benchmarking multi- and many-objective evolutionary algorithms under two optimization scenarios. IEEE Access 5, pp. 19597–19619. Cited by: §I, §I.
  • [29] T. Ulrich and L. Thiele (2012) Bounding the effectiveness of hypervolume-based (μ + λ)-archiving algorithms. In International Conference on Learning and Intelligent Optimization, pp. 235–249. Cited by: §III-A, §III-A, §III-A.
  • [30] L. While, L. Bradstreet, and L. Barone (2011) A fast way of calculating exact hypervolumes. IEEE Transactions on Evolutionary Computation 16 (1), pp. 86–95. Cited by: §II-A, §II-A, §IV-C.
  • [31] E. Zitzler and L. Thiele (1998) Multiobjective optimization using evolutionary algorithms: a comparative case study. In International Conference on Parallel Problem Solving from Nature, pp. 292–301. Cited by: §II-A.