A Two-phase Framework with a Bézier Simplex-based Interpolation Method for Computationally Expensive Multi-objective Optimization

03/29/2022
by Ryoji Tanabe, et al.
University of Tsukuba
FUJITSU

This paper proposes a two-phase framework with a Bézier simplex-based interpolation method (TPB) for computationally expensive multi-objective optimization. The first phase in TPB aims to approximate a few Pareto optimal solutions by optimizing a sequence of single-objective scalar problems. The first phase in TPB can fully exploit a state-of-the-art single-objective derivative-free optimizer. The second phase in TPB utilizes a Bézier simplex model to interpolate the solutions obtained in the first phase. The second phase in TPB fully exploits the fact that a Bézier simplex model can approximate the Pareto optimal solution set by exploiting its simplex structure when a given problem is simplicial. We investigate the performance of TPB on the 55 bi-objective BBOB problems. The results show that TPB performs significantly better than HMO-CMA-ES and some state-of-the-art meta-model-based optimizers.





1. Introduction

General context. This paper considers computationally expensive multi-objective black-box numerical optimization. Some real-world optimization problems require computationally expensive simulations to evaluate solutions (e.g., (Daniels et al., 2018; Yang et al., 2019)). In this case, only a limited budget of function evaluations is available for multi-objective optimization. Instead of general evolutionary multi-objective optimization (EMO) algorithms (e.g., NSGA-II (Deb et al., 2002) and MOEA/D (Zhang and Li, 2007)), meta-model-based approaches (Tabatabaei et al., 2015; Chugh et al., 2019) have generally been used for computationally expensive multi-objective optimization.

Some mathematical derivative-free optimizers (e.g., NEWUOA (Powell, 2008), BOBYQA (Powell, 2009), and SLSQP (Kraft, 1988)) have shown their effectiveness for computationally expensive single-objective black-box numerical optimization. For example, Hansen et al. (Hansen et al., 2010) investigated the performance of 31 optimizers on the noiseless BBOB function set (Hansen et al., 2009). Their results showed that NEWUOA achieves the best performance among the 31 optimizers for a small number of function evaluations. The results in (Posík and Huyer, 2012; Rios and Sahinidis, 2013) also reported the excellent convergence performance of NEWUOA. The results in (Hansen, 2019) demonstrated that SLSQP can quickly find the optimal solution on some unimodal functions. In (Bajer et al., 2019), Bajer et al. showed that BOBYQA outperforms some meta-model-based optimizers including SMAC (Hutter et al., [n. d.]) and lmm-CMA (Bouzarkouna et al., 2011).

Motivation. Let g be a scalarizing function that maps an m-dimensional objective vector to a scalar value. Let also W = {w^1, …, w^N} be a set of N uniformly distributed weight vectors. Under certain conditions, the optimal solution of a single-objective scalar optimization problem g(f(x), w) can be a weakly Pareto optimal solution (see Chapter 3.5 in (Miettinen, 1998)). Therefore, N weakly Pareto optimal solutions can potentially be obtained by solving a sequence of N single-objective scalar optimization problems g(f(x), w^1), …, g(f(x), w^N). Any single-objective optimizer can be applied to the N scalar optimization problems in principle. When the number of function evaluations is limited, a mathematical derivative-free optimizer is likely to be suitable for this purpose based on the above review.
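This scalarization idea can be sketched in a few lines of Python. The toy bi-objective function f, the weighted sum as g, and SciPy's Nelder-Mead (a stand-in for a derivative-free optimizer such as BOBYQA) are all illustrative assumptions, not the exact setup used later in the paper:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Toy bi-objective function (illustrative only): two quadratic
    bowls whose optima sit at different points."""
    return np.array([np.sum(x**2), np.sum((x - 2.0)**2)])

def g(x, w):
    """Weighted sum scalarizing function: maps the m-dimensional
    objective vector f(x) to a scalar value under weight vector w."""
    return float(np.dot(w, f(x)))

# N uniformly distributed weight vectors for m = 2 objectives.
N = 3
weights = [np.array([t, 1.0 - t]) for t in np.linspace(0.0, 1.0, N)]

# Solve one scalar problem g(f(x), w^i) per weight vector with a
# derivative-free optimizer (Nelder-Mead stands in for BOBYQA here).
solutions = [minimize(g, x0=np.zeros(2), args=(w,), method="Nelder-Mead").x
             for w in weights]
```

Each element of `solutions` approximates one (weakly) Pareto optimal solution; with only N scalar problems, the resulting set is necessarily sparse along the Pareto front.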

Actually, the first warm start phase in HMO-CMA-ES (Loshchilov and Glasmachers, 2016) adopts this idea. HMO-CMA-ES was designed to achieve good anytime performance for bi-objective optimization in terms of the hypervolume indicator (Zitzler and Thiele, 1998). HMO-CMA-ES is a hybrid multi-objective optimizer that consists of four phases. The first of the four phases in HMO-CMA-ES applies BOBYQA to a sequence of scalar optimization problems for only a small number of initial function evaluations, proportional to the number of variables n. Let A be the set of all solutions found so far by BOBYQA. At the end of the first phase, HMO-CMA-ES selects five solutions from A by applying environmental selection in SMS-EMOA (Beume et al., 2007). Then, the second phase in HMO-CMA-ES performs a steady-state MO-CMA-ES (Igel et al., 2006) with the initial population of the five solutions. Brockhoff et al. (Brockhoff et al., 2021) showed that HMO-CMA-ES performs significantly better than some multi-objective optimizers for a limited number of initial function evaluations, including NSGA-II (Deb et al., 2002), COMO-CMA-ES (Touré et al., 2019), and DMS (Custódio et al., 2011). Thus, their results indicate the effectiveness of mathematical derivative-free approaches to solving a scalar problem for computationally expensive multi-objective optimization.

One drawback of the above-discussed scalar optimization approach is that it can achieve only solutions that are sparsely distributed in the objective space, even in the best case. Since only a limited number of function evaluations is available for computationally expensive optimization, the number of scalar problems N needs to be as small as possible. Due to the small value of N, the above-discussed scalar optimization approach cannot obtain a set of non-dominated solutions that cover the entire Pareto front in the objective space.

However, we believe that this issue of the above-discussed scalar optimization approach can be addressed by using a solution interpolation method. Let X̄ be the set of solutions obtained by optimizing a sequence of N single-objective scalar optimization problems. Densely distributed solutions in the objective space can potentially be obtained by interpolating the sparsely distributed solutions in X̄. Some solution interpolation methods have been proposed in the literature (see Section 3). Unfortunately, existing methods were not designed for interpolating only a few solutions. In addition, we are particularly interested in optimization with a small budget of function evaluations.

The Bézier simplex is an extension of the Bézier curve (Farin, 2002) to higher dimensions. For a certain class of problems, the Bézier simplex has the capability to interpolate solutions, approximating the entire set of Pareto optimal solutions. More precisely, Hamada et al. (Hamada et al., 2020) showed that the set of Pareto optimal solutions is homeomorphic to an (m−1)-dimensional simplex under certain conditions, where m is the number of objectives. In such a case, Kobayashi et al. (Kobayashi et al., 2019) proved that a Bézier simplex model can approximate the Pareto optimal solution set. They also proposed an algorithm for fitting a Bézier simplex by extending Bézier curve fitting (Borges and Pastva, 2002). Their results in (Kobayashi et al., 2019) demonstrated that it achieves an accurate approximation with a small number of solutions. Thus, we expect that the Bézier simplex model can effectively interpolate the sparsely distributed solutions.

Contribution. Motivated by the above discussion, this paper proposes a two-phase framework with a Bézier simplex-based interpolation method (TPB) for computationally expensive multi-objective black-box optimization. The first phase performs a mathematical derivative-free optimizer on a sequence of N single-objective scalar optimization problems. The second phase fits a Bézier simplex model to the N solutions obtained in the first phase. Then, TPB samples interpolated solutions from the Bézier simplex model. We investigate the performance of TPB on the bi-objective BBOB function set (Brockhoff et al., in press). We also compare TPB with HMO-CMA-ES and state-of-the-art meta-model-based multi-objective optimizers.

Outline. Section 2 provides some preliminaries. Section 3 reviews related work. Section 4 introduces TPB. Section 5 describes our experimental setting. Section 6 shows analysis results. Section 7 concludes this paper.

Code availability. The code of TPB is available at https://github.com/ryojitanabe/tpb.

2. Preliminaries

2.1. Multi-objective optimization

We tackle multi-objective minimization of a vector-valued objective function f: X → ℝ^m, where X is the search space. Note that m is the dimension of the objective space, and n is the dimension of the search space. Let f = (f_1, …, f_m), where f_i is called the i-th objective function. The image of X, f(X) ⊆ ℝ^m in our case, is called the objective space. Throughout this paper, we consider a box-constrained search space, i.e., X = [l_1, u_1] × ⋯ × [l_n, u_n], where l_j and u_j are the lower and upper bounds of the j-th coordinate of the search space.

Our objective is to find a finite set of solutions that approximates the Pareto front F, which is defined as follows:

(1) F = { f(x*) ∈ ℝ^m | x* ∈ X* },

where X* is the Pareto optimal solution set defined below. For x, y ∈ X, x ≺ y represents the Pareto dominance relation (x ≺ y if f_i(x) ≤ f_i(y) holds for all i ∈ {1, …, m} and f_i(x) < f_i(y) holds for some i, and x ⊀ y otherwise). A solution x* ∈ X is said to be a Pareto optimal solution if no solution in X can dominate x*. The Pareto optimal solution set X* is the set of all Pareto optimal solutions. The objective is informally stated as finding a set of approximate Pareto optimal solutions whose images are well-distributed on F. The quality of such a set is often measured by a quality indicator such as the hypervolume indicator (Zitzler and Thiele, 1998).
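As a minimal illustration of the dominance relation defined above, the following sketch implements the Pareto dominance check and non-dominated filtering for minimization; the function names are ours:

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa <= fb in every objective and fa < fb in at least one."""
    fa, fb = np.asarray(fa), np.asarray(fb)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def nondominated(front):
    """Return the subset of objective vectors not dominated by any other."""
    return [v for v in front
            if not any(dominates(u, v) for u in front if u is not v)]
```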

In this paper, we suppose that we can access the objective function f only through an expensive black-box query x ↦ f(x). Its implications are summarized below. (1) The Jacobian and higher-order information of f is unavailable (derivative-free optimization). (2) The characteristic constants of f, such as the Lipschitz constant, are unavailable (black-box optimization). (3) Evaluation of f is computationally expensive (expensive optimization). (4) Each individual objective function value f_i(x) cannot be obtained at a lower computational cost than the full vector f(x). Therefore, the cost of the optimization process is measured by the number of f-calls. We assume that it is limited to a maximum budget.
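Since the number of f-calls is the cost measure, a benchmarking harness typically wraps f with an evaluation counter that enforces the budget. A minimal sketch (the class and exception names are our own, not part of TPB or COCO):

```python
class BudgetExceeded(Exception):
    """Raised when the maximum number of f-calls has been used up."""

class CountingObjective:
    """Wrap an expensive black-box f and count its evaluations."""
    def __init__(self, f, budget):
        self.f = f
        self.budget = budget
        self.evals = 0

    def __call__(self, x):
        if self.evals >= self.budget:
            raise BudgetExceeded(f"budget of {self.budget} f-calls exhausted")
        self.evals += 1
        return self.f(x)
```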

2.2. Simplicial problem

Kobayashi et al. (Kobayashi et al., 2019) defined a class of multi-objective optimization problems whose Pareto optimal solution set and Pareto front can be seen topologically as a simplex. Let m be a positive integer. The standard (m−1)-simplex is denoted by

Δ^{m−1} = { t = (t_1, …, t_m) ∈ ℝ^m | Σ_{i=1}^{m} t_i = 1, t_i ≥ 0 for all i }.

Let M = {1, …, m} be the index set of the objective functions. For each non-empty subset I ⊆ M, we define

Δ_I = { t ∈ Δ^{m−1} | t_i = 0 for all i ∉ I }

and f_I = (f_i)_{i ∈ I}, the subproblem that considers only the objectives indexed by I.

Definition 2.1 (Simplicial problem).

For a given objective function f, the multi-objective optimization problem of minimizing f is simplicial if there exists a map Φ: Δ^{m−1} → X such that for each non-empty subset I ⊆ M, its restriction Φ|_{Δ_I} gives the following homeomorphisms:

(2) Φ|_{Δ_I}: Δ_I → X*(f_I),
(3) f ∘ Φ|_{Δ_I}: Δ_I → f(X*(f_I)),

where X*(f_I) denotes the Pareto optimal solution set of the subproblem f_I.

2.3. Bézier simplex fitting

We denote the set of non-negative integers (including zero) by ℕ_0. Let d be an arbitrary integer in ℕ_0, and

ℕ_d^m = { α = (α_1, …, α_m) ∈ ℕ_0^m | Σ_{i=1}^{m} α_i = d }.

An (m−1)-Bézier simplex of degree d is a mapping b: Δ^{m−1} → ℝ^n determined by control points p_α ∈ ℝ^n (α ∈ ℕ_d^m) as follows:

(4) b(t) = Σ_{α ∈ ℕ_d^m} C(d, α) t^α p_α,

where C(d, α) = d! / (α_1! ⋯ α_m!) is a multinomial coefficient, and t^α = t_1^{α_1} ⋯ t_m^{α_m} is a monomial for each t ∈ Δ^{m−1} and α ∈ ℕ_d^m. The following theorem ensures that the Pareto optimal solution set and Pareto front of any simplicial problem can be approximated with arbitrary accuracy by a Bézier simplex of an appropriate degree:

Theorem 2.2 (Kobayashi et al. (Kobayashi et al., 2019, Theorem 1)).

Let Φ: Δ^{m−1} → ℝ^n be a continuous map. There is an infinite sequence of Bézier simplices b^(1), b^(2), … that converges uniformly to Φ on Δ^{m−1}.

With this result, Kobayashi et al. (Kobayashi et al., 2019) proposed the Bézier simplex fitting method to describe the Pareto optimal solution set of a simplicial problem. Suppose that we have a set of approximate Pareto optimal solutions {(t^i, x^i)}_{i=1}^{N}, where x^i and t^i are the i-th approximate Pareto optimal solution and its corresponding parameter, respectively. The Bézier simplex fitting method adjusts the control points {p_α} by minimizing the ordinary least squares (OLS) loss function:

L({p_α}) = Σ_{i=1}^{N} ‖x^i − b(t^i)‖².

Since the OLS loss function is a convex quadratic function with respect to the control points {p_α}, its minimization problem can be solved efficiently, for example, by solving a normal equation.
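Both the evaluation of (4) and the OLS fit can be sketched directly in Python. The first function evaluates a Bézier simplex from a dictionary of control points; the second fits the control points of a Bézier curve (the m = 2 case, where the simplex parameter reduces to a scalar t ∈ [0, 1], with t2 = 1 − t1) by ordinary least squares. All names are illustrative, and this is not the authors' implementation (pytorch-bsf):

```python
import math
import numpy as np

def multinomial(d, alpha):
    """Multinomial coefficient d! / (alpha_1! ... alpha_m!)."""
    out = math.factorial(d)
    for a in alpha:
        out //= math.factorial(a)
    return out

def bezier_simplex(t, control_points, d):
    """Evaluate the (m-1)-Bezier simplex of degree d at t in the simplex.
    control_points maps each multi-index alpha (tuple summing to d)
    to a control point in R^n."""
    t = np.asarray(t, dtype=float)
    b = np.zeros_like(next(iter(control_points.values())), dtype=float)
    for alpha, p in control_points.items():
        coef = multinomial(d, alpha) * np.prod(t ** np.array(alpha))
        b += coef * np.asarray(p, dtype=float)
    return b

def bernstein_design(ts, d):
    """Design matrix whose (i, j) entry is the degree-d Bernstein basis
    value binom(d, j) * t_i^j * (1 - t_i)^(d - j), for the m = 2 case."""
    ts = np.asarray(ts, dtype=float)
    return np.column_stack([
        math.comb(d, j) * ts**j * (1.0 - ts)**(d - j) for j in range(d + 1)
    ])

def fit_bezier_curve(ts, xs, d):
    """OLS fit of the control points: min_P || A P - X ||_F^2.
    The loss is linear in the control points, so lstsq solves it exactly."""
    A = bernstein_design(ts, d)
    P, *_ = np.linalg.lstsq(A, np.asarray(xs, dtype=float), rcond=None)
    return P  # (d + 1) control points, one per row
```

Because b(t) is linear in the control points, the fit is an ordinary linear least squares problem, which is exactly the convexity property mentioned above.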

3. Related work

Two-phase approaches have been well studied in the context of multi-objective optimization (e.g., (Hamada et al., 2008; Hirano and Yoshikawa, 2013; Hu et al., 2017; Regis, 2021)). TPLS (Paquete and Stützle, 2003; Dubois-Lacoste et al., 2013) is one of the most representative two-phase approaches for combinatorial optimization. Roughly speaking, the first phase in multi-objective two-phase approaches aims to find solutions that are well-converged to the Pareto front. Then, the second phase aims to generate a set of well-diversified solutions based on the solutions obtained in the first phase. Generally, two-phase approaches can produce only a poor-quality solution set when they are stopped before the maximum budget of function evaluations is reached (Dubois-Lacoste et al., 2011). Thus, the anytime performance of most two-phase approaches is poor. Here, we say that the anytime performance of an optimizer is good if it can obtain a well-approximated solution set at any time during the search process. The substantial difference between TPB and existing two-phase approaches is that the second phase in TPB interpolates solutions by utilizing a Bézier simplex model, which fully exploits the theoretical property of the Pareto optimal solution set. In addition, unlike TPB, all two-phase approaches but (Regis, 2021) were designed for non-expensive optimization. Here, the study (Regis, 2021) proposed a surrogate model-based approach for constrained bi-objective optimization.

Some methods for interpolating objective vectors (not solutions) obtained by an EMO algorithm have been proposed in the literature (Hartikainen et al., 2011, 2012; Bhattacharjee et al., 2017). A decision-maker can determine her/his preference by visually examining interpolated objective vectors. One of the most representative approaches is the PAINT method (Hartikainen et al., 2012), which interpolates an objective vector set using the Delaunay triangulation. Note that these interpolation methods cannot provide an inverse mapping from the objective space to the search space. In contrast, the second phase in TPB aims to interpolate solutions (not objective vectors) to approximate the Pareto front.

The Pareto estimation method (Giagkiozis and Fleming, 2014) aims to increase the number of non-dominated solutions obtained by an EMO algorithm. The Pareto estimation method uses a neural network model to find an inverse mapping from the objective space to the search space. GAN-LMEF (Wang et al., in press) interpolates randomly generated solutions on the manifold by using dimensionality reduction, clustering, and a GAN (Goodfellow et al., 2014). These two methods aim to interpolate a sufficiently large number of solutions. In contrast, the second phase in TPB aims to interpolate only N solutions (i.e., N = 3 in this study) by utilizing a Bézier simplex model.

Some EMO algorithms (e.g., RM-MEDA (Zhang et al., 2008)) exploit the simplex structure of the Pareto optimal solution set. BezEA (Maree et al., 2020) evolves a control point set for a Bézier curve to generate a high-quality solution set in terms of the “smoothness” measure, which was proposed in (Maree et al., 2020). Unlike these EMO algorithms, TPB exploits the property of the Pareto optimal solution set by using the theoretically well-founded Bézier simplex. In addition, no previous study has proposed an EMO algorithm based on the simplex structure of the Pareto optimal solution set for computationally expensive optimization.

4. Proposed framework

This section describes the proposed TPB, which consists of the first phase (Section 4.1) and the second phase (Section 4.2). Let W = {w^1, …, w^N} be a set of N weight vectors. We assume that N = 3, which is the minimum value of N (see Section 4.3.1).

In the first phase (Section 4.1), TPB aims to approximate N Pareto optimal solutions by applying a single-objective optimizer to the N scalar optimization problems g(f(x), w^1), …, g(f(x), w^N). Let X̄ be the set of the N best solutions for the N scalar problems obtained in the first phase. Here, the i-th solution x̄^i in X̄ should correspond to the i-th weight vector w^i in W. Ideally, the first phase should find X̄ such that each x̄^i in X̄ minimizes its corresponding scalar problem g(f(x), w^i). Let budget be the maximum budget of function evaluations for the whole process of TPB. The first phase in TPB can use r × budget function evaluations in the maximum case, where r ∈ (0, 1) is a control parameter of TPB. For example, when budget = 500 and r = 0.9, up to 450 function evaluations can be used in the first phase. Note that some optimizers have their own stopping criteria in addition to the maximum number of function evaluations. For example, BOBYQA stops when reaching its minimum trust region radius. Thus, it is possible that the first phase in TPB does not use all r × budget function evaluations.

The second phase in TPB (Section 4.2) aims to interpolate the N solutions in X̄ by using a Bézier simplex-based interpolation method (Kobayashi et al., 2019). The Bézier simplex model can approximate the Pareto optimal solution set (see Section 2.3). In addition, the Bézier simplex-based interpolation can be performed by minimizing the OLS loss function, which is a convex quadratic function.

Below, Sections 4.1 and 4.2 describe the first and second phases in TPB, respectively. Section 4.3 discusses the properties of TPB.

4.1. First phase

Algorithm 1 shows the first phase in TPB. In line 1 in Algorithm 1, the maximum budget of function evaluations available to the optimizer on each scalar problem is computed from r and budget. In line 2 in Algorithm 1, A is an archive that maintains all solutions found so far.

As in D-TPLS (Paquete and Stützle, 2003), the first phase in TPB first performs single-objective optimization of each of the m objective functions (lines 3–6 in Algorithm 1). This step aims to approximate the m Pareto optimal solutions that minimize the m objective functions, respectively. Unlike D-TPLS, these solutions are mainly used for the normalization procedure in the next step (lines 7–15 in Algorithm 1). TPB sets the initial solution x_init to the center of the search space (line 3 in Algorithm 1), where the j-th element of x_init is (l_j + u_j)/2. Then, TPB applies a pre-defined single-objective optimizer (optimizer) to each objective function f_i (line 5 in Algorithm 1). Here, A_i is the set of all solutions found by optimizer.

Next, the first phase in TPB aims to solve the remaining N − m scalar problem(s). Since TPB has already minimized the m objective functions, TPB here does not consider the m extreme weight vectors (line 7 in Algorithm 1). TPB sets the approximated ideal point z* and the approximated nadir point z^nad based on the archive A (line 8 in Algorithm 1). Note that this step always normalizes each objective value as follows: f̄_i(x) = (f_i(x) − z*_i) / (z^nad_i − z*_i). The initial solution is set to the best solution in A in terms of a given scalarizing function g (line 9 in Algorithm 1).

Finally, we set X̄ = {x̄^1, …, x̄^N}, where x̄^i is the best-so-far solution of the i-th scalar problem (lines 12–15 in Algorithm 1). The second phase in TPB interpolates the N solutions in X̄.

1 budget_local ← (r × budget) / N;
2 A ← ∅;
3 x_init ← center of the search space;
4 for i ∈ {1, …, m} do
5       Run optimizer on f_i with x_init and budget_local, obtaining the evaluated solution set A_i;
6       A ← A ∪ A_i;
7 for i ∈ {m + 1, …, N} do
8       Set z* and z^nad based on A;
9       x_init ← best solution in A in terms of g(f̄(x), w^i);
10      Run optimizer on g(f̄(x), w^i) with x_init and budget_local, obtaining the evaluated solution set A_i;
11      A ← A ∪ A_i;
12 X̄ ← ∅;
13 for i ∈ {1, …, N} do
14      x̄^i ← best-so-far solution of the i-th scalar problem;
15      X̄ ← X̄ ∪ {x̄^i};
Algorithm 1 The first phase in TPB
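A rough, self-contained sketch of the first phase described above, with SciPy's Nelder-Mead standing in for BOBYQA and the weighted sum as g. The ideal and nadir points are approximated here by the coordinate-wise minimum and maximum of the archived objective vectors, and all names are our own:

```python
import numpy as np
from scipy.optimize import minimize

def first_phase(f, lower, upper, weights, evals_per_problem):
    """Sketch of TPB's first phase. Assumes the first m weight vectors in
    `weights` are the extreme ones (one per objective), as in Algorithm 1."""
    m = len(f(lower))          # number of objectives (one probing call)
    archive = []               # all evaluated (x, f(x)) pairs found so far

    def record(x):
        x = np.asarray(x, dtype=float)
        y = f(x)
        archive.append((x, y))
        return y

    x_init = (lower + upper) / 2.0   # center of the search space
    best = []
    # Lines 3-6: optimize each objective separately (extreme weight vectors).
    for i in range(m):
        res = minimize(lambda x, i=i: record(x)[i], x_init,
                       method="Nelder-Mead",
                       options={"maxfev": evals_per_problem})
        best.append(res.x)
    # Lines 7-11: normalize with approximated ideal/nadir points, then solve
    # the remaining scalar problems, warm-started from the best archived solution.
    ys = np.array([y for _, y in archive])
    z_ideal, z_nadir = ys.min(axis=0), ys.max(axis=0)

    def g(y, w):
        return float(np.dot(w, (y - z_ideal) / (z_nadir - z_ideal)))

    for w in weights[m:]:
        x0 = min(archive, key=lambda e, w=w: g(e[1], w))[0]
        res = minimize(lambda x, w=w: g(record(x), w), x0,
                       method="Nelder-Mead",
                       options={"maxfev": evals_per_problem})
        best.append(res.x)
    return best, archive
```

The returned `best` list plays the role of X̄, and `archive` plays the role of A.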

4.2. Second phase

Let budget_1st be the number of function evaluations actually used in the first phase, where its maximum is r × budget. The second phase in TPB uses the remaining budget_2nd = budget − budget_1st function evaluations.

Let T = {t^1, …, t^N} be a set of parameter vectors, where t^i ∈ Δ^{m−1}. TPB treats the i-th weight vector w^i in W as the i-th parameter t^i in T. Thus, T is identical to W. With X̄ and T, we next train a Bézier simplex model b that takes a parameter t as an input and outputs a minimizer of the corresponding scalarizing function. Specifically, TPB fits a Bézier simplex model b to X̄ with T by solving the OLS loss minimization problem:

(5) minimize over {p_α}: Σ_{i=1}^{N} ‖x̄^i − b(t^i)‖²,

where x̄^i is the i-th solution in X̄.

Let T′ be a set of budget_2nd parameter vectors. After fitting the Bézier simplex model in (5), TPB generates budget_2nd solutions by using b and T′. It is expected that the budget_2nd generated solutions complement the N solutions in X̄. Any method can be used to generate T′, e.g., uniform random generation. The decision maker’s preference can also be incorporated into T′. In this study, we generate the budget_2nd parameters in T′ so that they are equally spaced. First, we equally generate budget_2nd + 2 parameters on Δ^1. Then, we remove the two extreme parameters (1, 0) and (0, 1) from T′. Since the first phase has already found the extreme solutions, we do not need to re-generate them. For example, when budget_2nd = 4, we obtain the following parameters: (0.2, 0.8), (0.4, 0.6), (0.6, 0.4), and (0.8, 0.2).
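The equally spaced parameter generation described above is easy to reproduce; a sketch for the bi-objective case (the function name is ours):

```python
import numpy as np

def second_phase_parameters(budget_2nd):
    """Equally spaced parameters on the 1-simplex, extremes removed
    (the extreme solutions were already found in the first phase)."""
    t1 = np.linspace(0.0, 1.0, budget_2nd + 2)[1:-1]
    return np.column_stack([t1, 1.0 - t1])  # rows (t1, t2) with t1 + t2 = 1
```

For budget_2nd = 4 this yields the four parameters (0.2, 0.8), (0.4, 0.6), (0.6, 0.4), and (0.8, 0.2), matching the example above; feeding each row into the fitted model b produces one interpolated solution per remaining function evaluation.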

4.3. Discussion

4.3.1. Control parameters for TPB

The numerical control parameters for TPB include the number of weight vectors N, the degree d of the Bézier simplex model, and the budget ratio r. Clearly, the best settings of N and d depend on the shape of the Pareto optimal solution set. We believe that N must be more than or equal to 3 so that a resulting Bézier simplex model can characterize the shape of the Pareto optimal solution set. This is because a Bézier simplex model fitting needs at least one non-extreme solution to handle a nonlinear Pareto optimal solution set. Similarly, d must be more than or equal to 2 to handle the nonlinearity of the Pareto optimal solution set. The best setting of r depends on the difficulty of solving the scalar problems. If the scalar problems are easy, r should be small; otherwise, the first phase in TPB can waste computational resources. However, as described at the beginning of Section 4, some modern optimizers (e.g., BOBYQA) automatically terminate the search. Thus, we believe that r can be set to a relatively high value.

The categorical control parameters for TPB include the scalarizing function g and the single-objective optimizer optimizer. Although TPB can use any g (e.g., the weighted Tchebycheff function), we set g to the weighted sum function in this study. Since the weighted sum is the simplest scalarizing function, it is a reasonable first choice. A mathematical derivative-free optimizer is suitable for optimizer for the reason discussed in Section 1. We set optimizer to BOBYQA, which is a state-of-the-art mathematical derivative-free optimizer for box-constrained optimization. The first phase in HMO-CMA-ES also adopts the weighted sum function and BOBYQA.
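For concreteness, the two scalarizing functions mentioned here can be written as follows (a sketch; names are ours, and y denotes an objective vector f(x)):

```python
import numpy as np

def weighted_sum(y, w):
    """g_ws(y | w) = sum_i w_i * y_i."""
    return float(np.dot(w, y))

def weighted_tchebycheff(y, w, z_ideal):
    """g_tch(y | w, z*) = max_i w_i * |y_i - z*_i|."""
    return float(np.max(np.asarray(w) * np.abs(np.asarray(y) - np.asarray(z_ideal))))
```

The weighted sum is cheaper and smoother but cannot reach non-convex regions of the Pareto front, whereas the weighted Tchebycheff function can, at the cost of a non-differentiable landscape.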

4.3.2. Advantages and disadvantages of TPB

One advantage of TPB is that it can use a state-of-the-art single-objective optimizer without any change. In contrast to meta-model-based optimizers, TPB does not require computationally expensive operations as long as the single-objective optimizer is computationally cheap. TPB can also exploit the structure of the Pareto optimal solution set by using the theoretically well-understood Bézier simplex.

As described in Section 3, the anytime performance of two-phase approaches is generally poor. TPB has the same disadvantage. In addition, the second phase in TPB cannot interpolate solutions when a given problem is not simplicial (see Section 2.2). This is because a Bézier simplex model can represent only sets homeomorphic to a standard simplex. Fortunately, for many practical real-world problems, scatter plots of approximate Pareto optimal solutions suggest that those problems are simplicial (e.g., (Shoval et al., 2012; Mastroddi and Gemma, 2013; Vrugt et al., 2003; Tanabe and Ishibuchi, 2020)).

5. Experimental setup

We investigated the performance of the proposed TPB using COCO (Hansen et al., 2021), which is the standard benchmarking platform in the GECCO community. We used the 55 bi-objective BBOB problems (Brockhoff et al., in press) provided by COCO. The first and second objective functions in a bi-objective BBOB problem are selected from the 24 single-objective noiseless BBOB functions (Hansen et al., 2009). Although the DTLZ (Deb et al., 2005) and WFG (Huband et al., 2006) problems are the most commonly used test problems, many previous studies (e.g., (Brockhoff et al., 2015; Ishibuchi et al., 2017; Chen et al., 2020)) pointed out that they have some serious issues, including the regularity of the Pareto front and the existence of distance and position variables. In contrast, the bi-objective BBOB problems address all these issues. Each bi-objective BBOB problem consists of 15 instances in COCO. A single run of a multi-objective optimizer was performed on each problem instance. In other words, 15 runs were performed for each problem. We set the number of variables n to 2, 3, 5, 10, and 20.

We used the automatic performance indicator (Brockhoff et al., 2016) provided by COCO. COCO uses an unbounded external archive to maintain all non-dominated solutions found so far. When there exists at least a single solution in the archive that dominates a reference point in the normalized objective space, the performance of optimizers is measured by a referenced version of the hypervolume indicator (Zitzler and Thiele, 1998) using the archive. Otherwise, the performance of optimizers is measured by the smallest distance to the region of interest, which is bounded by the nadir point.
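For intuition about the hypervolume-based measurement, a minimal 2-D hypervolume computation with respect to a reference point can be sketched as follows (an illustration for minimization, not COCO's implementation):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-D objective vectors with respect to a
    reference point, assuming minimization (larger hypervolume is better)."""
    pts = [p for p in points if p[0] < ref[0] and p[1] < ref[1]]
    pts.sort(key=lambda p: p[0])      # ascending first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:              # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```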

We compare TPB with HMO-CMA-ES (Loshchilov and Glasmachers, 2016), ParEGO (Knowles, 2006), MOTPE (Ozaki et al., 2020), K-RVEA (Chugh et al., 2018), KTA2 (Song et al., 2021), and EDN-ARMOEA (Guo et al., 2022). We demonstrate the effectiveness of the second phase in TPB by comparing it with the warm start phase in HMO-CMA-ES, which is based on a sophisticated scalarizing approach. We are also interested in the performance of TPB compared to state-of-the-art meta-model-based optimizers. We used the Optuna (Akiba et al., 2019) implementation of MOTPE and the PlatEMO (Tian et al., 2017) implementation of the surrogate-assisted EMO algorithms. We used the results of HMO-CMA-ES provided by the COCO data archive (https://numbbo.github.io/data-archive).

We set the control parameters for TPB based on the discussion in Section 4.3.1. We used BOBYQA and the weighted sum function as optimizer and g, respectively. We evaluated the performance of TPB with several parameter settings on the first BBOB problem in our preliminary study, and set the parameters based on the rough hand-tuning results. We used the Py-BOBYQA (Cartis et al., 2019) implementation of BOBYQA. Unlike HMO-CMA-ES, we used the default parameter setting of BOBYQA. For the Bézier simplex model fitting method, we used the code provided by the authors of (Kobayashi et al., 2019) (https://gitlab.com/hmkz/pytorch-bsf). We used a workstation with an Intel(R) 48-Core Xeon Platinum 8260 (24-Core×2) 2.4GHz and 384GB RAM using Ubuntu 18.04.

We set the maximum budget of function evaluations (budget) to three different values. As discussed in Section 4.3.2, TPB is not an anytime algorithm. Thus, the behavior of TPB depends on the termination condition, i.e., budget in this study. The performance of some state-of-the-art surrogate-assisted EMO algorithms (e.g., K-RVEA and KTA2) also depends on budget. This is because, similar to TPB, they are not anytime algorithms. For example, K-RVEA has a temperature-like parameter that determines the magnitude of the penalty value. Generally, the best parameter setting for EMO algorithms depends on budget (Dymond et al., 2013; Bezerra et al., 2018). In addition, budget has not been standardized in the field of computationally expensive multi-objective optimization. For the above-discussed reasons, we used the three budget settings.

6. Results

This section describes our analysis results. Through experiments, Sections 6.1–6.4 address the following research questions.

Section 6.1 How does TPB interpolate solutions?

Section 6.2 Is TPB competitive with state-of-the-art optimizers?

Section 6.3 How important is the two-phase mechanism in TPB?

Section 6.4 How does the choice of the control parameters influence the performance of TPB?

6.1. On the solution interpolation in TPB

Figure 3 shows the distribution of solutions generated by TPB on the first instance of a bi-objective BBOB problem with n = 2. Note that the solution interpolation in TPB is performed in the search space (Figure 3(a)), not the objective space (Figure 3(b)). In this run, the second phase generated budget_2nd = 4 solutions.

In Figure 3, the three blue filled circles represent the three solutions in X̄ found in the first phase with the three weight vectors (1, 0), (0, 1), and (0.5, 0.5). In contrast, the four orange unfilled circles represent the four solutions generated in the second phase with the four parameters that are the same as in the example in Section 4.2.

As shown in Figure 3, the three solutions obtained in the first phase are well-converged to the Pareto front, but they are sparsely distributed. The second phase makes up for this shortcoming. As seen from Figure 3, the four solutions generated in the second phase interpolate the three solutions. The four solutions are distributed as if they were obtained by a scalar optimization approach with the four weight vectors corresponding to the parameters (0.2, 0.8), (0.4, 0.6), (0.6, 0.4), and (0.8, 0.2). As a result, TPB could obtain seven well-converged and well-distributed solutions. As demonstrated here, the first and second phases in TPB are complementary to each other.

(a) Search space
(b) Objective space
Figure 3. Distribution of solutions obtained by TPB in the search and objective spaces.

6.2. Comparison with state-of-the-art optimizers

Figure 16 shows the results of TPB and the six optimizers on the 55 bi-objective BBOB problems for each tested combination of the number of variables n and budget. Recall that budget is the maximum budget of function evaluations. We do not show the results for n = 3, but they are similar to the results for n = 2 and 5. Most meta-model-based optimizers require extremely high computational cost, especially for higher dimensions and larger budgets. Experiments on the 825 (= 55 × 15) BBOB problem instances for each dimension are also time-consuming. For these reasons, we stopped an optimizer when it did not finish within a week. The missing results in Figure 16 indicate that the corresponding optimizer was stopped before reaching budget, e.g., the results of KTA2 in Figure 16(c). In Figure 16, “best 2016” shows the performance of a virtual best solver constructed based on the results of 15 optimizers participating in the GECCO BBOB 2016 workshop. Thus, “best 2016” does not represent an actual optimizer. The cross in Figure 16 shows the number of function evaluations used by each optimizer. Since ParEGO, K-RVEA, KTA2, and EDN-ARMOEA cannot stop exactly at a pre-defined budget, their crosses exceed budget in some cases, e.g., the results of KTA2 in Figure 16(a).

Figure 16. Comparison with state-of-the-art optimizers. Subfigures (a)–(l) show the results for each combination of the number of variables n and budget. “HMO-CMA” and “EDN” stand for HMO-CMA-ES and EDN-ARMOEA, respectively.

Figure 16 shows the bootstrapped empirical cumulative distribution function (ECDF) (Brockhoff et al., 2015; Hansen et al., 2016) based on the results on all 55 bi-objective BBOB problems. We used the COCO postprocessing tool cocopp with the expensive option --expensive to generate all ECDF figures in this paper. For each problem instance, a set of target values to reach is derived from the indicator value of the Pareto optimal solution set and 31 precision levels in the expensive setting. Thus, 31 target values are available for each problem instance. The vertical axis in the ECDF figure represents the proportion of target values reached by the corresponding optimizer within the specified number of function evaluations. Here, the horizontal axis represents the number of function evaluations. For example, Figure 16(d) indicates that HMO-CMA-ES reached about 60% of the 31 target values within the given budget.
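One plausible reading of the target-based bookkeeping, sketched for a hypervolume-type indicator where larger values are better (this illustrates the idea of an ECDF data point, not cocopp's exact procedure):

```python
def fraction_of_targets_reached(indicator_values, ref_value, precisions):
    """Fraction of targets (ref_value - delta for each precision delta)
    reached by the best indicator value observed so far; larger indicator
    values are assumed to be better."""
    best = max(indicator_values)
    targets = [ref_value - d for d in precisions]
    return sum(best >= t for t in targets) / len(targets)
```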

Statistical significance is tested with the rank-sum test for a given value by using COCO. Due to space limitations, we show the results in the supplementary material. Note that the statistical test results are generally consistent with the results in Figure 16.

As shown in Figure 16, HMO-CMA-ES is the clear winner within function evaluations for any . The five meta-model-based optimizers perform almost identically until function evaluations. This is because they all generate an initial solution set of size by Latin hypercube sampling. These results suggest that scalarization-based approaches using BOBYQA, as in HMO-CMA-ES, perform best when only a very small number of function evaluations (i.e., evaluations) is available.
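The scalarization-based approach can be illustrated with a minimal sketch: each weight vector turns the multi-objective problem into one single-objective subproblem for a derivative-free optimizer. The bi-objective function, weight vectors, and the crude (1+1) random search below are all hypothetical stand-ins; HMO-CMA-ES and TPB use their own scalarizations and far more capable optimizers such as BOBYQA:

```python
import numpy as np

def bi_objective(x):
    """Hypothetical bi-objective test problem: two shifted sphere functions."""
    x = np.asarray(x, dtype=float)
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def solve_scalar_subproblem(w, x0, steps=300, seed=0):
    """Minimize the weighted-sum scalarization w @ f(x) by (1+1) random search.

    This stands in for the state-of-the-art derivative-free optimizer
    (e.g., BOBYQA) that an actual first phase would exploit.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = float(w @ bi_objective(x))
    sigma = 0.5
    for _ in range(steps):
        cand = x + sigma * rng.standard_normal(x.shape)
        fc = float(w @ bi_objective(cand))
        if fc < fx:
            x, fx = cand, fc
            sigma *= 1.2   # expand the step size on success
        else:
            sigma *= 0.95  # shrink it on failure
    return x

# One scalar subproblem per weight vector yields a few Pareto-optimal candidates.
weights = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
solutions = [solve_scalar_subproblem(w, x0=np.zeros(2)) for w in weights]
```

Solving such a short sequence of subproblems is what makes the first phase affordable under a very small evaluation budget.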

Some meta-model-based optimizers (e.g., ParEGO and K-RVEA) perform better than HMO-CMA-ES beyond evaluations, especially for larger budgets. We observed that the ranks of some meta-model-based optimizers depend on the maximum budget. For example, for , as shown in Figure 16(b), KTA2 performs the worst when budget . In contrast, as shown in Figure 16(f), KTA2 performs the best at the end of the run when budget . These observations indicate that the performance of some meta-model-based optimizers is sensitive to budget. One may wonder about the high performance of ParEGO. We believe this is because performance is evaluated based on the unbounded external archive. Although an analysis of the performance of meta-model-based optimizers is beyond the scope of this paper, it is an interesting research direction.

As seen from Figures 16(a), (e), and (i), TPB performs poorly compared to the state-of-the-art optimizers for . In contrast, TPB achieves good performance at the end of the run for . As shown in Figures 16(c), (d), (g), (h), (k), and (l), TPB is the best performer at the end of the run for and . These results indicate that TPB is effective for the two larger budgets and for .

Figure 17. Average computation time (sec) of each optimizer over the 15 instances of for budget .
Figure 22. Comparison with state-of-the-art optimizers on four selected problems with (budget ).

Figure 17 shows the average computation time of each optimizer over the 15 instances of for budget . We expect that the computation time of HMO-CMA-ES is the same as or less than that of TPB. We could not measure the computation time of ParEGO, KTA2, and EDN-ARMOEA for within practical time due to their high computational cost. As seen from Figure 17, the computation time of TPB is lower than those of the five meta-model-based optimizers, except for that of MOTPE for . The computation of TPB took only approximately 6.6 seconds even for . These results indicate that TPB is faster than meta-model-based optimizers in terms of computation time.

Figure 22 shows the results on , , , and , which are the multi-objective versions of the Sphere, Rosenbrock, (rotated) Rastrigin, and (rotated) Schwefel functions. As discussed in Section 4.3, the Bézier simplex model-based interpolation method assumes that a given problem is simplicial. Although an in-depth theoretical analysis is needed, we believe that the 15 unimodal (and weakly multimodal) bi-objective BBOB problems, including and , satisfy this assumption. As shown in Figures 22(a) and (b), TPB performs well on and . The results on the other unimodal problems (except for , , and ) are similar to those in Figures 22(a) and (b). In contrast, the remaining 40 multimodal bi-objective BBOB problems, including and , do not satisfy the assumption. As seen from Figure 22(c), the poor performance of TPB on is consistent with our intuition. However, Figure 22(d) shows that TPB unexpectedly performs the best on . Similar results were observed on ten other multimodal problems (e.g., and ). These results suggest that the solution interpolation method may perform well even when a given problem is not simplicial. Further investigation is needed in future research.
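To illustrate the idea behind the second phase, the following minimal sketch evaluates a Bézier curve (the bi-objective special case of a Bézier simplex) by de Casteljau's algorithm and samples it densely. The control points here are hypothetical; in TPB they would be fitted to the solutions obtained in the first phase:

```python
import numpy as np

def bezier_curve(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by de Casteljau's algorithm."""
    pts = np.asarray(control_points, dtype=float)
    # Repeated linear interpolation between consecutive points collapses
    # the control polygon to a single point on the curve.
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Hypothetical control points in a 2-D decision space.
ctrl = [[0.0, 0.0], [0.5, 1.0], [1.0, 0.0]]
# Dense sampling interpolates between the few phase-one solutions.
interpolated = [bezier_curve(ctrl, t) for t in np.linspace(0.0, 1.0, 5)]
```

A degree-two curve is enough for this sketch; the actual model degree and the least-squares fitting of the control points follow Kobayashi et al. (2019).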

In summary, we demonstrated the effectiveness of TPB for computationally expensive multi-objective optimization. Our results on the bi-objective BBOB problems show that TPB performs better than HMO-CMA-ES and the meta-model-based optimizers for . We also observed that TPB is computationally cheaper than the meta-model-based optimizers for .

6.3. Importance of the two-phase mechanism

Here, let us consider a first-phase-only version of TPB (TPB1) and a second-phase-only version (TPB2). We investigate the importance of the two-phase mechanism in TPB by comparing TPB with TPB1 and TPB2. TPB1 skips the second phase, while TPB2 skips the first phase. As in most meta-model-based optimizers (e.g., K-RVEA), TPB2 first generates an initial solution set of size by Latin hypercube sampling. Then, TPB2 performs the second phase based on the best out of the solutions.
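Latin hypercube sampling, as used for the initial design, can be sketched as follows. This is a minimal illustration; the sample size below is arbitrary and not the setting used by TPB2 or K-RVEA:

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """One point per stratum in each dimension, randomly paired across dimensions."""
    rng = np.random.default_rng(seed)
    # Stratify [0, 1) into n_samples equal cells and jitter within each cell.
    cells = (np.arange(n_samples)[:, None]
             + rng.random((n_samples, n_dims))) / n_samples
    # Permute each column independently so strata are paired at random.
    for j in range(n_dims):
        cells[:, j] = cells[rng.permutation(n_samples), j]
    return cells

X = latin_hypercube(21, 2, seed=0)  # 21 points in [0, 1)^2, one per stratum per axis
```

Unlike uniform random sampling, every one-dimensional stratum contains exactly one point, which spreads the initial design evenly over the search space.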

Figure 25 shows the comparison of TPB, TPB1, and TPB2 on the 55 bi-objective BBOB problems with and for budget . Note that the results for are similar to those for . The results show that TPB1 performs worse than TPB at the end of the run for and . Interestingly, as shown in Figure 25(a), TPB2 outperforms TPB for . We observed that TPB2 performs well on multimodal problems for . However, as seen from Figure 25(b), TPB2 performs very poorly for . These results demonstrate the effectiveness of the two-phase mechanism in TPB.

Figure 25. Comparison of TPB, TPB1, and TPB2.

6.4. Impact of and

Although Section 4.3.1 gave the default values of and , it is important to understand their impact on the performance of TPB. Figure 28 shows the results of TPB with and on the 55 bi-objective BBOB problems for , where budget and . For example, “K3-r0.9” represents the results of TPB with and . For the sake of clarity, Figure 28 shows only the results of TPB with the three best and the three worst parameter settings.

As seen from Figure 28(a), TPB performs best for budget when using and . In contrast, as shown in Figure 28(b), TPB with and performs best for budget . Figure 28 also shows that the gap between the best and worst performance of TPB is relatively small for budget . Although we do not show detailed results here, we observed that the best setting of and depends on the problem, , and budget. These results suggest that the performance of TPB can be further improved by tuning and . Nevertheless, and can be a good first choice for .

(a) budget
(b) budget
Figure 28. Comparison of TPB with various and .

7. Conclusion

We have proposed TPB for computationally expensive multi-objective black-box optimization. The first phase in TPB fully exploits an efficient derivative-free optimizer to find well-approximated solutions of scalar problems within a small budget of function evaluations, where . The second phase in TPB interpolates these solutions with the Bézier simplex model-based method, which exploits the simplex structure of the Pareto optimal solution set. Our results show that TPB performs significantly better than HMO-CMA-ES and some state-of-the-art meta-model-based multi-objective optimizers on the bi-objective BBOB problems with when the maximum budget of function evaluations is set to , , and . We have also investigated the properties of TPB.

We believe that TPB gives a new perspective on the field of computationally expensive multi-objective optimization. Although the EMO community has mainly focused on meta-model-based approaches for computationally expensive optimization, TPB provides a new research direction. It may also be interesting to extend TPB to preference-based multi-objective optimization.

Acknowledgements.
This work was supported by JSPS KAKENHI Grant Number 21K17824. We thank Dr. Ilya Loshchilov for providing the code of HMO-CMA-ES.

References

  • Akiba et al. (2019) Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next-generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, Ankur Teredesai, Vipin Kumar, Ying Li, Rómer Rosales, Evimaria Terzi, and George Karypis (Eds.). ACM, 2623–2631. https://doi.org/10.1145/3292500.3330701
  • Bajer et al. (2019) Lukás Bajer, Zbynek Pitra, Jakub Repický, and Martin Holena. 2019. Gaussian Process Surrogate Models for the CMA Evolution Strategy. Evol. Comput. 27, 4 (2019), 665–697. https://doi.org/10.1162/evco_a_00244
  • Beume et al. (2007) Nicola Beume, Boris Naujoks, and Michael T. M. Emmerich. 2007. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 181, 3 (2007), 1653–1669. https://doi.org/10.1016/j.ejor.2006.08.008
  • Bezerra et al. (2018) Leonardo C. T. Bezerra, Manuel López-Ibáñez, and Thomas Stützle. 2018. A Large-Scale Experimental Evaluation of High-Performing Multi- and Many-Objective Evolutionary Algorithms. Evol. Comput. 26, 4 (2018). https://doi.org/10.1162/evco_a_00217
  • Bhattacharjee et al. (2017) Kalyan Shankar Bhattacharjee, Hemant Kumar Singh, and Tapabrata Ray. 2017. An approach to generate comprehensive piecewise linear interpolation of pareto outcomes to aid decision making. J. Glob. Optim. 68, 1 (2017), 71–93. https://doi.org/10.1007/s10898-016-0454-0
  • Borges and Pastva (2002) Carlos F. Borges and Tim Pastva. 2002. Total least squares fitting of Bézier and B-spline curves to ordered data. Computer Aided Geometric Design 19, 4 (2002), 275–289. https://doi.org/10.1016/s0167-8396(02)00088-2
  • Bouzarkouna et al. (2011) Zyed Bouzarkouna, Anne Auger, and Didier Yu Ding. 2011. Local-meta-model CMA-ES for partially separable functions. In 13th Annual Genetic and Evolutionary Computation Conference, GECCO 2011, Proceedings, Dublin, Ireland, July 12-16, 2011, Natalio Krasnogor and Pier Luca Lanzi (Eds.). ACM, 869–876. https://doi.org/10.1145/2001576.2001695
  • Brockhoff et al. (2022) Dimo Brockhoff, Anne Auger, Nikolaus Hansen, and Tea Tušar. 2022 (in press). Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites. Evol. Comput. (in press).
  • Brockhoff et al. (2021) Dimo Brockhoff, Baptiste Plaquevent-Jourdain, Anne Auger, and Nikolaus Hansen. 2021. DMS and MultiGLODS: black-box optimization benchmarking of two direct search methods on the bbob-biobj test suite. In GECCO ’21: Genetic and Evolutionary Computation Conference, Companion Volume, Lille, France, July 10-14, 2021, Krzysztof Krawiec (Ed.). ACM, 1251–1258. https://doi.org/10.1145/3449726.3463207
  • Brockhoff et al. (2015) Dimo Brockhoff, Thanh-Do Tran, and Nikolaus Hansen. 2015. Benchmarking Numerical Multiobjective Optimizers Revisited. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2015, Madrid, Spain, July 11-15, 2015, Sara Silva and Anna Isabel Esparcia-Alcázar (Eds.). ACM, 639–646. https://doi.org/10.1145/2739480.2754777
  • Brockhoff et al. (2016) Dimo Brockhoff, Tea Tusar, Dejan Tusar, Tobias Wagner, Nikolaus Hansen, and Anne Auger. 2016. Biobjective Performance Assessment with the COCO Platform. CoRR abs/1605.01746 (2016). arXiv:1605.01746 http://arxiv.org/abs/1605.01746
  • Cartis et al. (2019) Coralia Cartis, Jan Fiala, Benjamin Marteau, and Lindon Roberts. 2019. Improving the Flexibility and Robustness of Model-based Derivative-free Optimization Solvers. ACM Trans. Math. Softw. 45, 3 (2019), 32:1–32:41. https://doi.org/10.1145/3338517
  • Chen et al. (2020) Weiyu Chen, Hisao Ishibuchi, and Ke Shang. 2020. Proposal of a Realistic Many-Objective Test Suite. In Parallel Problem Solving from Nature - PPSN XVI - 16th International Conference, PPSN 2020, Leiden, The Netherlands, September 5-9, 2020, Proceedings, Part I (Lecture Notes in Computer Science, Vol. 12269), Thomas Bäck, Mike Preuss, André H. Deutz, Hao Wang, Carola Doerr, Michael T. M. Emmerich, and Heike Trautmann (Eds.). Springer, 201–214.
  • Chugh et al. (2018) Tinkle Chugh, Yaochu Jin, Kaisa Miettinen, Jussi Hakanen, and Karthik Sindhya. 2018. A Surrogate-Assisted Reference Vector Guided Evolutionary Algorithm for Computationally Expensive Many-Objective Optimization. IEEE Trans. Evol. Comput. 22, 1 (2018), 129–142. https://doi.org/10.1109/TEVC.2016.2622301
  • Chugh et al. (2019) Tinkle Chugh, Karthik Sindhya, Jussi Hakanen, and Kaisa Miettinen. 2019. A survey on handling computationally expensive multiobjective optimization problems with evolutionary algorithms. Soft Comput. 23, 9 (2019), 3137–3166. https://doi.org/10.1007/s00500-017-2965-0
  • Custódio et al. (2011) Ana Luísa Custódio, J. F. Aguilar Madeira, A. Ismael F. Vaz, and Luís Nunes Vicente. 2011. Direct Multisearch for Multiobjective Optimization. SIAM J. Optim. 21, 3 (2011), 1109–1140. https://doi.org/10.1137/10079731X
  • Daniels et al. (2018) Steven J. Daniels, Alma As-Aad Mohammad Rahat, Richard M. Everson, Gavin R. Tabor, and Jonathan E. Fieldsend. 2018. A Suite of Computationally Expensive Shape Optimisation Problems Using Computational Fluid Dynamics. In Parallel Problem Solving from Nature - PPSN XV - 15th International Conference, Coimbra, Portugal, September 8-12, 2018, Proceedings, Part II (Lecture Notes in Computer Science, Vol. 11102), Anne Auger, Carlos M. Fonseca, Nuno Lourenço, Penousal Machado, Luís Paquete, and L. Darrell Whitley (Eds.). Springer, 296–307. https://doi.org/10.1007/978-3-319-99259-4_24
  • Deb et al. (2002) Kalyanmoy Deb, Samir Agrawal, Amrit Pratap, and T. Meyarivan. 2002. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 2 (2002), 182–197. https://doi.org/10.1109/4235.996017
  • Deb et al. (2005) Kalyanmoy Deb, Lothar Thiele, Marco Laumanns, and Eckart Zitzler. 2005. Scalable Test Problems for Evolutionary Multi-Objective Optimization. In Evolutionary Multiobjective Optimization. Theoretical Advances and Applications. Springer, 105–145.
  • Dubois-Lacoste et al. (2011) Jérémie Dubois-Lacoste, Manuel López-Ibáñez, and Thomas Stützle. 2011. Improving the anytime behavior of two-phase local search. Ann. Math. Artif. Intell. 61, 2 (2011), 125–154. https://doi.org/10.1007/s10472-011-9235-0
  • Dubois-Lacoste et al. (2013) Jérémie Dubois-Lacoste, Manuel López-Ibáñez, and Thomas Stützle. 2013. Combining Two Search Paradigms for Multi-objective Optimization: Two-Phase and Pareto Local Search. In Hybrid Metaheuristics, El-Ghazali Talbi (Ed.). Studies in Computational Intelligence, Vol. 434. Springer, 97–117. https://doi.org/10.1007/978-3-642-30671-6_3
  • Dymond et al. (2013) Antoine S. D. Dymond, Schalk Kok, and P. Stephan Heyns. 2013. The sensitivity of multi-objective optimization algorithm performance to objective function evaluation budgets. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2013, Cancun, Mexico, June 20-23, 2013. IEEE, 1868–1875. https://doi.org/10.1109/CEC.2013.6557787
  • Farin (2002) G.E. Farin. 2002. Curves and Surfaces for CAGD: A Practical Guide. Morgan Kaufmann. https://books.google.co.jp/books?id=5HYTP1dIAp4C
  • Giagkiozis and Fleming (2014) Ioannis Giagkiozis and Peter J. Fleming. 2014. Pareto Front Estimation for Decision Making. Evol. Comput. 22, 4 (2014), 651–678. https://doi.org/10.1162/EVCO_a_00128
  • Goodfellow et al. (2014) Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014. Generative Adversarial Nets. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, Zoubin Ghahramani, Max Welling, Corinna Cortes, Neil D. Lawrence, and Kilian Q. Weinberger (Eds.). 2672–2680. https://proceedings.neurips.cc/paper/2014/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html
  • Guo et al. (2022) Dan Guo, Xilu Wang, Kailai Gao, Yaochu Jin, Jinliang Ding, and Tianyou Chai. 2022. Evolutionary Optimization of High-Dimensional Multiobjective and Many-Objective Expensive Problems Assisted by a Dropout Neural Network. IEEE Trans. Syst. Man Cybern. Syst. 52, 4 (2022), 2084–2097. https://doi.org/10.1109/TSMC.2020.3044418
  • Hamada et al. (2020) Naoki Hamada, Kenta Hayano, Shunsuke Ichiki, Yutaro Kabata, and Hiroshi Teramoto. 2020. Topology of Pareto Sets of Strongly Convex Problems. SIAM Journal on Optimization 30, 3 (2020), 2659–2686. https://doi.org/10.1137/19M1271439 arXiv:https://doi.org/10.1137/19M1271439
  • Hamada et al. (2008) Naoki Hamada, Jun Sakuma, Shigenobu Kobayashi, and Isao Ono. 2008. Functional-Specialization Multi-Objective Real-Coded Genetic Algorithm: FS-MOGA. In Parallel Problem Solving from Nature - PPSN X, 10th International Conference Dortmund, Germany, September 13-17, 2008, Proceedings (Lecture Notes in Computer Science, Vol. 5199), Günter Rudolph, Thomas Jansen, Simon M. Lucas, Carlo Poloni, and Nicola Beume (Eds.). Springer, 691–701. https://doi.org/10.1007/978-3-540-87700-4_69
  • Hansen (2019) Nikolaus Hansen. 2019. A global surrogate assisted CMA-ES. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2019, Prague, Czech Republic, July 13-17, 2019, Anne Auger and Thomas Stützle (Eds.). ACM, 664–672. https://doi.org/10.1145/3321707.3321842
  • Hansen et al. (2016) Nikolaus Hansen, Anne Auger, Dimo Brockhoff, Dejan Tusar, and Tea Tusar. 2016. COCO: Performance Assessment. CoRR abs/1605.03560 (2016). arXiv:1605.03560 http://arxiv.org/abs/1605.03560
  • Hansen et al. (2010) Nikolaus Hansen, Anne Auger, Raymond Ros, Steffen Finck, and Petr Posík. 2010. Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009. In Genetic and Evolutionary Computation Conference, GECCO 2010, Proceedings, Portland, Oregon, USA, July 7-11, 2010, Companion Material, Martin Pelikan and Jürgen Branke (Eds.). ACM, 1689–1696. https://doi.org/10.1145/1830761.1830790
  • Hansen et al. (2021) Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. 2021. COCO: a platform for comparing continuous optimizers in a black-box setting. Optim. Methods Softw. 36, 1 (2021), 114–144.
  • Hansen et al. (2009) Nikolaus Hansen, Steffen Finck, Raymond Ros, and Anne Auger. 2009. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions. Technical Report RR-6829. INRIA.
  • Hartikainen et al. (2011) Markus Hartikainen, Kaisa Miettinen, and Margaret M. Wiecek. 2011. Constructing a Pareto front approximation for decision making. Math. Methods Oper. Res. 73, 2 (2011), 209–234. https://doi.org/10.1007/s00186-010-0343-0
  • Hartikainen et al. (2012) Markus Hartikainen, Kaisa Miettinen, and Margaret M. Wiecek. 2012. PAINT: Pareto front interpolation for nonlinear multiobjective optimization. Comput. Optim. Appl. 52, 3 (2012), 845–867. https://doi.org/10.1007/s10589-011-9441-z
  • Hirano and Yoshikawa (2013) Hiroyuki Hirano and Tomohiro Yoshikawa. 2013. A study on two-step search based on PSO to improve convergence and diversity for Many-Objective Optimization Problems. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2013, Cancun, Mexico, June 20-23, 2013. IEEE, 1854–1859. https://doi.org/10.1109/CEC.2013.6557785
  • Hu et al. (2017) Wang Hu, Gary G. Yen, and Guangchun Luo. 2017. Many-Objective Particle Swarm Optimization Using Two-Stage Strategy and Parallel Cell Coordinate System. IEEE Trans. Cybern. 47, 6 (2017), 1446–1459. https://doi.org/10.1109/TCYB.2016.2548239
  • Huband et al. (2006) Simon Huband, Philip Hingston, Luigi Barone, and Lyndon While. 2006. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 10, 5 (2006), 477–506. https://doi.org/10.1109/TEVC.2005.861417
  • Hutter et al. (2011) Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. 2011. Sequential Model-Based Optimization for General Algorithm Configuration. In Learning and Intelligent Optimization - 5th International Conference, LION 5 (Lecture Notes in Computer Science, Vol. 6683), Carlos A. Coello Coello (Ed.). Springer, 507–523.
  • Igel et al. (2006) Christian Igel, Thorsten Suttorp, and Nikolaus Hansen. 2006. Steady-State Selection and Efficient Covariance Matrix Update in the Multi-objective CMA-ES. In Evolutionary Multi-Criterion Optimization, 4th International Conference, EMO 2007, Matsushima, Japan, March 5-8, 2007, Proceedings (Lecture Notes in Computer Science, Vol. 4403), Shigeru Obayashi, Kalyanmoy Deb, Carlo Poloni, Tomoyuki Hiroyasu, and Tadahiko Murata (Eds.). Springer, 171–185. https://doi.org/10.1007/978-3-540-70928-2_16
  • Ishibuchi et al. (2017) Hisao Ishibuchi, Yu Setoguchi, Hiroyuki Masuda, and Yusuke Nojima. 2017. Performance of Decomposition-Based Many-Objective Algorithms Strongly Depends on Pareto Front Shapes. IEEE Trans. Evol. Comput. 21, 2 (2017), 169–190. https://doi.org/10.1109/TEVC.2016.2587749
  • Knowles (2006) Joshua D. Knowles. 2006. ParEGO: a hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Trans. Evol. Comput. 10, 1 (2006), 50–66. https://doi.org/10.1109/TEVC.2005.851274
  • Kobayashi et al. (2019) Ken Kobayashi, Naoki Hamada, Akiyoshi Sannai, Akinori Tanaka, Kenichi Bannai, and Masashi Sugiyama. 2019. Bézier Simplex Fitting: Describing Pareto Fronts of Simplicial Problems with Small Samples in Multi-Objective Optimization. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019. AAAI Press, 2304–2313. https://doi.org/10.1609/aaai.v33i01.33012304
  • Kraft (1988) Dieter Kraft. 1988. A Software Package for Sequential Quadratic Programming. Technical Report DFVLR-FB 88-28. Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt.
  • Loshchilov and Glasmachers (2016) Ilya Loshchilov and Tobias Glasmachers. 2016. Anytime Bi-Objective Optimization with a Hybrid Multi-Objective CMA-ES (HMO-CMA-ES). In Genetic and Evolutionary Computation Conference, GECCO 2016, Denver, CO, USA, July 20-24, 2016, Companion Material Proceedings, Tobias Friedrich, Frank Neumann, and Andrew M. Sutton (Eds.). ACM, 1169–1176. https://doi.org/10.1145/2908961.2931698
  • Maree et al. (2020) Stefanus C. Maree, Tanja Alderliesten, and Peter A. N. Bosman. 2020. Ensuring Smoothly Navigable Approximation Sets by Bézier Curve Parameterizations in Evolutionary Bi-objective Optimization. In Parallel Problem Solving from Nature - PPSN XVI - 16th International Conference, PPSN 2020, Leiden, The Netherlands, September 5-9, 2020, Proceedings, Part II (Lecture Notes in Computer Science, Vol. 12270), Thomas Bäck, Mike Preuss, André H. Deutz, Hao Wang, Carola Doerr, Michael T. M. Emmerich, and Heike Trautmann (Eds.). Springer, 215–228. https://doi.org/10.1007/978-3-030-58115-2_15
  • Mastroddi and Gemma (2013) Franco Mastroddi and Stefania Gemma. 2013. Analysis of Pareto frontiers for multidisciplinary design optimization of aircraft. Aerospace Science and Technology 28, 1 (2013), 40–55. https://doi.org/10.1016/j.ast.2012.10.003
  • Miettinen (1998) Kaisa Miettinen. 1998. Nonlinear Multiobjective Optimization. Springer.
  • Ozaki et al. (2020) Yoshihiko Ozaki, Yuki Tanigaki, Shuhei Watanabe, and Masaki Onishi. 2020. Multiobjective tree-structured parzen estimator for computationally expensive optimization problems. In GECCO ’20: Genetic and Evolutionary Computation Conference, Cancún Mexico, July 8-12, 2020, Carlos Artemio Coello Coello (Ed.). ACM, 533–541. https://doi.org/10.1145/3377930.3389817
  • Paquete and Stützle (2003) Luís Paquete and Thomas Stützle. 2003. A Two-Phase Local Search for the Biobjective Traveling Salesman Problem. In Evolutionary Multi-Criterion Optimization, Second International Conference, EMO 2003, Faro, Portugal, April 8-11, 2003, Proceedings (Lecture Notes in Computer Science, Vol. 2632), Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele (Eds.). Springer, 479–493. https://doi.org/10.1007/3-540-36970-8_34
  • Posík and Huyer (2012) Petr Posík and Waltraud Huyer. 2012. Restarted Local Search Algorithms for Continuous Black Box Optimization. Evol. Comput. 20, 4 (2012), 575–607. https://doi.org/10.1162/EVCO_a_00087
  • Powell (2008) M. J. D. Powell. 2008. Developments of NEWUOA for minimization without derivatives. IMA J. Numer. Anal. 28, 4 (2008), 649–664. https://doi.org/10.1093/imanum/drm047
  • Powell (2009) M. J. D. Powell. 2009. The BOBYQA algorithm for bound constrained optimization without derivatives. Technical Report DAMTP 2009/NA06. University of Cambridge.
  • Regis (2021) Rommel G. Regis. 2021. A two-phase surrogate approach for high-dimensional constrained discrete multi-objective optimization. In GECCO ’21: Genetic and Evolutionary Computation Conference, Companion Volume, Lille, France, July 10-14, 2021, Krzysztof Krawiec (Ed.). ACM, 1870–1878. https://doi.org/10.1145/3449726.3463204
  • Rios and Sahinidis (2013) Luis Miguel Rios and Nikolaos V. Sahinidis. 2013. Derivative-free optimization: a review of algorithms and comparison of software implementations. J. Glob. Optim. 56, 3 (2013), 1247–1293. https://doi.org/10.1007/s10898-012-9951-y
  • Shoval et al. (2012) O. Shoval, H. Sheftel, G. Shinar, Y. Hart, O. Ramote, A. Mayo, E. Dekel, K. Kavanagh, and U. Alon. 2012. Evolutionary Trade-Offs, Pareto Optimality, and the Geometry of Phenotype Space. Science 336, 6085 (2012), 1157–1160. https://doi.org/10.1126/science.1217405 arXiv:http://science.sciencemag.org/content/336/6085/1157.full.pdf
  • Song et al. (2021) Zhenshou Song, Handing Wang, Cheng He, and Yaochu Jin. 2021. A Kriging-Assisted Two-Archive Evolutionary Algorithm for Expensive Many-Objective Optimization. IEEE Trans. Evol. Comput. 25, 6 (2021), 1013–1027. https://doi.org/10.1109/TEVC.2021.3073648
  • Tabatabaei et al. (2015) Mohammad Tabatabaei, Jussi Hakanen, Markus Hartikainen, Kaisa Miettinen, and Karthik Sindhya. 2015. A survey on handling computationally expensive multiobjective optimization problems using surrogates: non-nature inspired methods. Struct. Multidiscipl. Optim. 52 (2015), 1–25.
  • Tanabe and Ishibuchi (2020) Ryoji Tanabe and Hisao Ishibuchi. 2020. An easy-to-use real-world multi-objective optimization problem suite. Applied Soft Computing 89 (2020), 106078. https://doi.org/10.1016/j.asoc.2020.106078
  • Tian et al. (2017) Ye Tian, Ran Cheng, Xingyi Zhang, and Yaochu Jin. 2017. PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization [Educational Forum]. IEEE Comput. Intell. Mag. 12, 4 (2017), 73–87. https://doi.org/10.1109/MCI.2017.2742868
  • Touré et al. (2019) Cheikh Touré, Nikolaus Hansen, Anne Auger, and Dimo Brockhoff. 2019. Uncrowded hypervolume improvement: COMO-CMA-ES and the sofomore framework. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2019, Prague, Czech Republic, July 13-17, 2019, Anne Auger and Thomas Stützle (Eds.). ACM, 638–646. https://doi.org/10.1145/3321707.3321852
  • Vrugt et al. (2003) Jasper A. Vrugt, Hoshin V. Gupta, Luis A. Bastidas, Willem Bouten, and Soroosh Sorooshian. 2003. Effective and Efficient Algorithm for Multiobjective Optimization of Hydrologic Models. Water Resources Research 39, 8 (2003), 1214–1232. https://doi.org/10.1029/2002WR001746
  • Wang et al. (2022) Zhenzhong Wang, Haokai Hong, Kai Ye, Guang-En Zhang, Min Jiang, and Kay Chen Tan. 2022 (in press). Manifold Interpolation for Large-Scale Multiobjective Optimization via Generative Adversarial Networks. IEEE Trans. Neural Networks Learn. Syst. (in press).
  • Yang et al. (2019) Kaifeng Yang, Pramudita Satria Palar, Michael Emmerich, Koji Shimoyama, and Thomas Bäck. 2019. A multi-point mechanism of expected hypervolume improvement for parallel multi-objective bayesian global optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2019, Prague, Czech Republic, July 13-17, 2019, Anne Auger and Thomas Stützle (Eds.). ACM, 656–663. https://doi.org/10.1145/3321707.3321784
  • Zhang and Li (2007) Qingfu Zhang and Hui Li. 2007. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 11, 6 (2007), 712–731. https://doi.org/10.1109/TEVC.2007.892759
  • Zhang et al. (2008) Qingfu Zhang, Aimin Zhou, and Yaochu Jin. 2008. RM-MEDA: A Regularity Model-Based Multiobjective Estimation of Distribution Algorithm. IEEE Trans. Evol. Comput. 12, 1 (2008), 41–63. https://doi.org/10.1109/TEVC.2007.894202
  • Zitzler and Thiele (1998) Eckart Zitzler and Lothar Thiele. 1998. Multiobjective Optimization Using Evolutionary Algorithms - A Comparative Case Study. In Parallel Problem Solving from Nature - PPSN V, 5th International Conference, Amsterdam, The Netherlands, September 27-30, 1998, Proceedings (Lecture Notes in Computer Science, Vol. 1498), A. E. Eiben, Thomas Bäck, Marc Schoenauer, and Hans-Paul Schwefel (Eds.). Springer, 292–304. https://doi.org/10.1007/BFb0056872