Meta-Learning for Black-box Optimization

07/16/2019 · by Vishnu TV, et al.

Recently, neural networks trained as optimizers under the "learning to learn" or meta-learning framework have been shown to be effective for a broad range of optimization tasks, including derivative-free black-box function optimization. Recurrent neural networks (RNNs) trained via gradient descent to optimize a diverse set of synthetic non-convex differentiable functions have proven effective at optimizing derivative-free black-box functions. In this work, we propose RNN-Opt: an approach for learning RNN-based optimizers for optimizing real-parameter single-objective continuous functions under limited budget constraints. Existing approaches utilize an observed-improvement-based meta-learning loss function for training such models. We propose training RNN-Opt on synthetic non-convex functions with known (approximate) optimal values, directly using discounted regret as the meta-learning loss function. We hypothesize that a regret-based loss function mimics typical testing scenarios, and would therefore lead to better optimizers compared to optimizers trained only to propose queries that improve over previous queries. Further, RNN-Opt incorporates simple yet effective enhancements during training and inference to deal with the following practical challenges: i) unknown range of possible values for the black-box function to be optimized, and ii) practical and domain-knowledge-based constraints on the input parameters. We demonstrate the efficacy of RNN-Opt in comparison to existing methods on several synthetic as well as standard benchmark black-box functions, along with an anonymized industrial constrained optimization problem.


1 Introduction

Several practical optimization problems, such as process black-box optimization for complex dynamical systems, pose a unique challenge owing to the restriction on the number of possible function evaluations. Such black-box functions do not have a simple closed form but can be evaluated (queried) at any arbitrary query point in the domain. However, evaluating real-world complex processes is expensive and time-consuming; the optimization algorithm must therefore use as few real-world function evaluations as possible. Most practical optimization problems are also constrained in nature, i.e. they have one or more constraints on the values of the input parameters. In this work, we focus on real-parameter single-objective black-box optimization (BBO), where the goal is to obtain a value as close to the maximum value of the objective function as possible by adjusting the values of the real-valued continuous input parameters while ensuring domain constraints are not violated. We further assume a limited budget, i.e. we assume that querying the black-box function is expensive and thus only a small number of queries can be made.

Efficient global optimization of expensive black-box functions [14] requires proposing the next query (input parameter values) to the black-box function based on past queries and the corresponding responses (function evaluations). BBO can be mapped to the problem of proposing the next query given past queries and the corresponding responses such that the expected improvement in the function value is maximized, as in Bayesian Optimization approaches [4]. While most research in optimization has focused on engineering algorithms catering to specific classes of problems, recent meta-learning [24] approaches, e.g. [2, 18, 5, 27, 7], cast the design of an optimization algorithm as a learning problem rather than the traditional hand-engineering approach, and then propose approaches to train neural networks that learn to optimize. In contrast to a traditional machine learning approach, where a neural network is trained on a single task using training data samples so that it can generalize to unseen data samples from the same data distribution, here the neural network is trained on a distribution of similar tasks (in our case optimization tasks) so as to learn a strategy that generalizes to related but unseen tasks from a similar task distribution. The meta-learning approaches attempt to train a single network to optimize several functions at once such that the network can effectively generalize to optimize unseen functions.

Recently, [5] proposed a meta-learning approach wherein a recurrent neural network (an RNN with gated units such as Long Short-Term Memory (LSTM) [9]) learns to optimize a large number of diverse synthetic non-convex functions to yield a learned, task-independent optimizer. The RNN iteratively uses the sequence of past queries and corresponding responses to propose the next query in order to maximize the observed improvement (OI) in the response value. We refer to this approach as RNN-OI in this work. Once the RNN is trained to optimize a diverse set of synthetic functions by using gradient descent, it is able to generalize well to solve unseen derivative-free black-box optimization problems [5, 29]. Such learned optimizers are shown to be faster in terms of the time taken to propose the next query compared to Bayesian optimizers, as they do not require any matrix inversion or optimization of acquisition functions, and also have lower regret values within the training horizon, i.e. the number of steps of the optimization process for which the RNN is trained to generate queries.

Key contributions of this work and the challenges addressed can be summarized as follows:

  1. Regret-based loss function: We hypothesize that training an RNN optimizer using a loss function that minimizes the regret observed for a given number of queries more closely resembles the performance measure of an optimizer, and is therefore preferable to a loss function based on OI such as the one used in [5, 29]. To this end, we propose a simple yet highly effective loss function that yields superior results compared to the existing OI loss for black-box optimization. The regret of the optimizer is the difference between the optimal value (maximum of the black-box function) and the realized maximum value.

  2. Deal with lack of prior knowledge on range of the black-box function: In many practical optimization problems, it may be difficult to ascertain the possible range of values the function can take, and the range of values would vary across applications. On the other hand, neural networks are known to work well only on normalized inputs, and can be numerically unstable and difficult to train on very large or very small values, as typical non-linear activation functions like the sigmoid tend to saturate for large inputs and then adjust slowly during training. RNNs are most easily trained when their inputs are well conditioned and have a similar scale as their latent state, and suitable scaling often accelerates training [27]. We therefore propose incremental normalization, which dynamically normalizes the output (response) from the black-box function using the response values observed so far before the value is passed as an input to the RNN, and observe significant improvements in terms of regret by doing so.

  3. Incorporate domain constraints: Any practical optimization problem has a set of constraints on the input parameters. It is important that the RNN optimizer is penalized when it proposes query points outside the desired limits. We introduce a mechanism to achieve this by giving additional feedback to the RNN whenever it proposes a query that violates domain constraints. In addition to the regret-based loss, the RNN is also trained to simultaneously minimize domain constraint violations. We show that an RNN optimizer trained in this manner attains lower regret values in fewer steps when subjected to domain constraints compared to an RNN optimizer not explicitly trained to utilize this feedback.

We refer to the proposed approach as RNN-Opt. As a result of the above considerations, RNN-Opt can deal with an unknown range of function values and also incorporate domain constraints. We demonstrate that RNN-Opt works well on optimizing unseen benchmark black-box functions and outperforms RNN-OI in terms of the optimal value attained under a limited budget for 2-dimensional and 6-dimensional input spaces. We also perform extensive ablation experiments demonstrating the importance of each of the above-stated features in RNN-Opt.

The rest of the paper is organized as follows: We contrast our work to existing literature in Section 2, followed by defining the problem in Section 3. We present the details of our approach in Section 4, followed by experimental evaluation in Section 5, and conclude in Section 6.

2 Related Work

Our work falls under the category of real-parameter black-box global optimization [21]. Traditional approaches for black-box optimization like covariance matrix adaptation evolution strategy (CMA-ES) [8], Nelder-Mead [20], and Particle Swarm Optimization (PSO) [15] hand-design rules using heuristics (e.g. using nature-inspired genetic algorithms) to decide the next query point(s) given the observations made so far. Another category of approaches for global optimization of black-box functions includes Bayesian optimization techniques [4, 26, 25]. These approaches use the observations (query and response) made thus far to approximate the black-box function via a surrogate (meta-) model, e.g. using a Gaussian Process [10], and then use this model to construct an acquisition function to decide the next query point. The acquisition function updates needed at each step are known to be costly [5].

Learned optimizers: There has been a recent interest in learning optimizers under the meta-learning setting [24] by training RNN optimizers via gradient descent. For example, [2] casts the design of an optimization algorithm as a learning problem and uses an LSTM model to learn an optimizer for a particular class of optimization problems, e.g. quadratic functions, training neural networks, etc. Similarly, [18, 7] cast optimizer learning as learning a policy under a reinforcement learning setting. [27] proposes a hierarchical RNN architecture to learn optimizers that scale well to optimizing a large number of parameters (high-dimensional input space). However, the above meta-learning approaches for optimization assume the availability of gradient information to decide the next set of parameters, which is not available in the case of black-box optimization. Our work builds upon the meta-learning approach for learning black-box optimizers proposed in [5]. This approach mimics sequential model-based Bayesian approaches in the sense that it proposes an RNN optimizer that stores sequential information about previous queries and responses, and accesses this memory to generate the next candidate query. RNN-OI mimics the Bayesian-optimization-based sequential decision-making process [4] (refer to [5] for details) while being significantly faster than standard BBO algorithms like SMAC [11] and Spearmint [26], as it does not involve any matrix inversion or optimization of acquisition functions. RNN-OI was successfully tested on Gaussian process bandits, simple low-dimensional controllers, and hyper-parameter tuning.

Handling domain constraints in neural networks: Recent work on physics-guided deep learning [13, 19] incorporates domain knowledge into the learning process via additional loss terms. Such approaches can be useful in our setting if the optimizer network is to be trained from scratch for a given application. However, the goal of building a generic optimizer that can be transferred to new applications requires incorporating domain constraints post hoc, at inference time, when the optimizer is suggesting query points. This is useful not only to adapt the same optimizer to a new application but also in another practical scenario: adapting to a new set of domain constraints for a given application. ThermalNet [6] uses a deep Q-network as an optimizer and an LSTM predictor for combustion optimization of a boiler in a power plant, but does not handle domain constraints. Similar to our approach, ChemOpt [29] uses an RNN-based optimizer for chemical reaction optimization, but does not address handling an unknown range for the function being optimized or incorporating domain constraints.

Handling unknown range of function values: Suitable scaling of the input and output of hidden layers in neural networks has been shown to accelerate training [12, 23, 3, 17]. Dynamic input scaling has been used in a setting similar to ours [27] to ensure that the neural-network-based optimizer is invariant to parameter scale. However, there the scaling is applied to the averaged gradients. In our setting, we use a similar approach but apply dynamic scaling to the function evaluations being fed back as input to RNN-Opt.

3 Problem Overview

We consider learning an optimizer that can optimize (e.g., maximize) a black-box function f: Θ → ℝ, where Θ ⊆ ℝ^d is the domain of the input parameters. We assume that the function f does not have a closed-form representation, is costly to evaluate, and does not allow the computation of gradients. In other words, the optimizer can query the function f at a point x ∈ Θ to obtain a response y = f(x), but it does not obtain any gradient information, and in particular it cannot make any assumptions on the analytical form of f. The goal is to find x* = argmax_{x ∈ Θ} f(x) within a limited budget, i.e. within a limited number T of queries that can be made to the black-box.

We consider training an optimizer f_opt with parameters φ such that, given the queries x_1, ..., x_t and the corresponding responses y_1, ..., y_t from f, where y_i = f(x_i), f_opt proposes the next query point x_{t+1} under a budget constraint of T queries, i.e. t + 1 ≤ T:

    x_{t+1} = f_opt(x_1, y_1, ..., x_t, y_t; φ)    (1)
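To make the interface of Eq. 1 concrete, the following is a minimal sketch of the query loop described above; propose_next is a hypothetical stand-in for the learned optimizer f_opt (which in this paper is the LSTM-based RNN of Section 4), not part of the original method.

```python
# Minimal sketch of the black-box optimization loop in Eq. 1.
# `propose_next` is a hypothetical stand-in for the learned optimizer f_opt.
import numpy as np

def optimize_black_box(f, propose_next, domain_low, domain_high, budget_T, d):
    """Query the black-box f for budget_T steps; return the best (x, y) found."""
    rng = np.random.default_rng(0)
    # The first query is sampled uniformly from the domain (as in Section 4.1).
    x = rng.uniform(domain_low, domain_high, size=d)
    history = []  # list of (x_t, y_t) pairs observed so far
    for _ in range(budget_T):
        y = f(x)                   # expensive black-box evaluation
        history.append((x, y))
        x = propose_next(history)  # next query from past queries/responses
    return max(history, key=lambda xy: xy[1])
```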

4 RNN-Opt

We model f_opt using an LSTM-based RNN. (For implementation, we use a variant of LSTMs as described in [28].) Recurrent neural networks (RNNs) with gated units such as Long Short-Term Memory (LSTM) [9] units are a popular choice for sequence modeling, making predictions about future values given the past. They do so by maintaining a memory of all the relevant information from the sequence of inputs observed so far. In the meta-learning or training phase, a diverse set of synthetically generated differentiable non-convex functions (refer Appendix 0.A) with known global optima is used to train the RNN (using gradient descent). The RNN is then used to predict the next query in order to intelligently explore the search space given the sequence of previous queries and the function responses. The RNN is expected to learn to retain any information about previous queries and responses that is relevant to proposing the next query so as to minimize the regret, as shown in Fig. 1.

4.1 RNN-Opt without Domain Constraints

Given a trained RNN-based optimizer and a differentiable function f, inference in RNN-Opt follows the iterative process below for t = 1, ..., T−1: at each step t, the output of the final recurrent hidden layer of the RNN is used to generate the output via an affine transformation to finally obtain x_{t+1}.

    h_{t+1} = RNN(h_t, x_t, y_t; θ)        (2)
    μ_{t+1}, Σ_{t+1} = W h_{t+1} + b       (3)
    x_{t+1} ~ N(μ_{t+1}, Σ_{t+1})          (4)
    y_{t+1} = f(x_{t+1})                   (5)

where RNN(·; θ) represents the recurrent network with parameters θ, f is the function to be optimized, and (W, b) defines the affine transformation of the final output (hidden state) h_{t+1} of the RNN. The parameters θ, W, and b together constitute φ. Instead of directly training the network to propose the next query x_{t+1} as in [5], we use a stochastic RNN to estimate μ_{t+1} and Σ_{t+1} as in Equation 3, then sample x_{t+1} from the multivariate Gaussian distribution N(μ_{t+1}, Σ_{t+1}). Introducing randomness in the query generation process leads to better exploration compared to a deterministic model [29]. The first query x_1 is sampled from a uniform distribution over the domain Θ of the function f to be optimized. Once the network is trained, f can be replaced by any black-box function that takes a d-dimensional input.
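The sketch below illustrates one inference step of this stochastic query generation (Eqs. 2-5). The rnn_step function and the diagonal-covariance parameterization are assumptions made for illustration, not the exact architecture used in the paper.

```python
# Sketch of one stochastic inference step (Eqs. 2-5), assuming a hypothetical
# rnn_step(h, x, y) -> h_next and a diagonal covariance parameterization.
import numpy as np

def propose_next_query(rnn_step, W, b, h, x, y, rng, d):
    """Return (x_next, h_next) given the previous query x and response y."""
    h_next = rnn_step(h, x, y)            # Eq. 2: update hidden state
    out = W @ h_next + b                  # Eq. 3: affine transformation
    mu, log_var = out[:d], out[d:]        # split into mean and log-variance
    sigma = np.exp(0.5 * log_var)         # diagonal std-dev (assumption)
    x_next = rng.normal(mu, sigma)        # Eq. 4: sample from N(mu, Sigma)
    return x_next, h_next                 # Eq. 5: caller evaluates y = f(x_next)
```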

For any synthetically generated function f, we assume the (approximate) optimum x* can be found, e.g. using gradient descent, since the closed form of the function is known. Hence, we assume that the optimal value y* of f, given by y* = f(x*), is known. Therefore, it is easy to determine the regret y* − max_{i ≤ t} y_i after t iterations (queries) to the function f. We can then define a regret-based loss function as follows:

Figure 1: Computation flow in RNN-Opt. During training, the functions f are differentiable and obtained using Equation 12. Once trained, f is replaced by the black-box function to be optimized.

    L_R = Σ_{t=2}^{T} γ^{T−t} ReLU(y* − max_{1≤i≤t} y_i)    (6)

where ReLU(z) = max(z, 0). Since the regret is expected to be high during initial iterations because of the random initialization of x_1 but desired to be low close to t = T, we give exponentially increasing importance to regret terms via a discount factor 0 < γ ≤ 1. In contrast to the regret loss, the OI loss used in RNN-OI is given by [5, 29]:

    L_OI = −Σ_{t=2}^{T} γ^{T−t} ReLU(y_t − max_{1≤i<t} y_i)    (7)

It is to be noted that using L_R as the loss function mimics a supervised scenario where the target y* for each optimization task is known and explicitly used to guide the learning process. On the other hand, L_OI mimics an unsupervised scenario where the target is unknown and the learning process solely relies on the feedback about whether it is able to improve over iterations. It is important to note that once trained, the model requires neither x* nor y* during inference.
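The following sketch shows how the two training losses could be computed from a single trajectory of responses, under the assumption that both take the discounted forms reconstructed in Eqs. 6 and 7 with weight γ^{T−t}:

```python
# Sketch of the regret-based loss (Eq. 6) and OI loss (Eq. 7) for one trajectory
# of responses y_1..y_T, assuming the discounted forms given above.
def regret_loss(ys, y_star, gamma=0.98):
    """Discounted regret loss: later steps get exponentially higher weight."""
    T = len(ys)
    loss = 0.0
    for t in range(1, T):                        # steps t = 2..T (1-based)
        best_so_far = max(ys[: t + 1])           # max_{i<=t} y_i
        loss += gamma ** (T - (t + 1)) * max(y_star - best_so_far, 0.0)
    return loss

def oi_loss(ys, gamma=1.0):
    """Observed-improvement loss: reward improving over the best query so far."""
    T = len(ys)
    loss = 0.0
    for t in range(1, T):
        best_before = max(ys[:t])                # max_{i<t} y_i
        loss -= gamma ** (T - (t + 1)) * max(ys[t] - best_before, 0.0)
    return loss
```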

4.1.1 Incremental Normalization

We do not assume any constraint on the range of values the functions used for training and inference can take. Although this feature is critical for most practical applications, it poses a challenge on the training and inference procedures using an RNN: neural networks are known to work well only on normalized inputs, and can be numerically unstable and difficult to train on very large or very small values, as typical non-linear activation functions like the sigmoid tend to saturate for large inputs and then adjust slowly during training. RNNs are most easily trained when their inputs are well conditioned and have a similar scale as their latent state, and suitable scaling often accelerates training [12, 27]. This poses a challenge during both training and inference if we directly use y_t as an input to the RNN.

Figure 2: Effect of not using suitable scaling (incremental normalization in our case) of black-box function value during inference.

Fig. 2 illustrates the saturation effect if suitable incremental normalization of function values is not used during inference. This behavior at inference time was noted in [5] (as per electronic correspondence with the authors); however, it was not considered while training RNN-OI. In order to deal with any range of values that the training functions can take during training, or that the black-box function can take during inference, we consider incremental normalization while training, such that y_t in Eq. 2 is replaced by ỹ_t = (y_t − μ_t)/σ_t, where μ_t and σ_t are the mean and standard deviation of the responses y_1, ..., y_t observed so far.
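A minimal sketch of this incremental normalization of responses, assuming the running mean/standard-deviation form reconstructed above (the small eps in the denominator is added here purely to avoid division by zero and is not part of the original formulation):

```python
# Sketch of incremental normalization of black-box responses (Section 4.1.1),
# normalizing each new response by the running mean and standard deviation of
# the responses observed so far; eps only guards against division by zero.
import numpy as np

class IncrementalNormalizer:
    def __init__(self, eps=1e-8):
        self.values = []
        self.eps = eps

    def normalize(self, y):
        """Record the new response y and return its normalized value."""
        self.values.append(y)
        mu = np.mean(self.values)
        sigma = np.std(self.values)
        return (y - mu) / (sigma + self.eps)
```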

4.2 RNN-Opt with Domain Constraints (RNN-Opt-DC)

Consider a constrained optimization problem of finding x* = argmax_{x} f(x) subject to constraints c_j(x) ≤ 0, j = 1, ..., C, where C is the number of constraints. To ensure that the optimizer proposes queries that satisfy the domain constraints, or is at least able to receive feedback when it proposes a query that violates any domain constraints, we consider the following enhancements in RNN-Opt, as depicted in Fig. 3:

Figure 3: Computation flow in RNN-Opt-DC. Here f is the function to be optimized, and the proposed query x_t is used to compute the penalty p_t. Further, if p_t = 0, the actual value of f, i.e. y_t = f(x_t), is passed to the loss function and the RNN; else y_t is set to a substitute value rather than being obtained from the black-box.

1. Input explicit feedback to the RNN via a penalty function p(x) ≥ 0 that captures the extent to which a proposed query violates any of the domain constraints. We consider the following instantiation of the penalty function: p(x) = Σ_{j=1}^{C} ReLU(c_j(x)), i.e. for any c_j(x) > 0 a penalty equal to c_j(x) is incurred, while for any c_j(x) ≤ 0 the contribution to the penalty is 0. The real-valued penalty thus also captures the cumulative extent of violation. Further, similar to normalizing y_t, we also normalize p_t = p(x_t) incrementally and use the normalized penalty p̃_t as an additional input to the RNN, such that:

    h_{t+1} = RNN(h_t, x_t, ỹ_t, p̃_t; θ)    (8)

Further, whenever p_t > 0, i.e. when one or more of the domain constraints are violated for the proposed query, we set y_t to a substitute value rather than actually getting a response from the black-box. This is useful in practice: for example, when trying to optimize a complex dynamical system, getting a response from the system for such a query is not possible, as it can be catastrophic.

2. During training, an additional domain constraint loss L_D is considered that penalizes the optimizer if it proposes a query that does not satisfy one or more of the domain constraints.

    L_D = Σ_{t=1}^{T} p(x_t)    (9)

The overall loss is then given by:

    L = L_R + λ L_D    (10)

where λ controls how strictly the constraints on the domain of parameters should be enforced; higher λ implies stricter adherence to constraints. It is worth noting that the above formulation of incorporating domain constraints puts no restriction on the number of constraints nor on their nature, in the sense that the constraints can be linear or non-linear. Further, complex non-linear constraints based on domain knowledge can also be incorporated in a similar fashion during training, e.g. as used in [13, 19]. Apart from optimizing (in our case, maximizing) f, the optimizer is also being simultaneously trained to minimize p(x).
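A short sketch of how the combined training objective (Eqs. 9-10) could be assembled from a trajectory of proposed queries, assuming the penalty form given above and an un-discounted sum over steps for L_D (an assumption); regret_loss refers to the function sketched earlier:

```python
# Sketch of the domain-constraint loss (Eq. 9) and the overall loss (Eq. 10),
# assuming p(x) = sum_j max(c_j(x), 0) and an un-discounted sum over steps.
# `regret_loss` is the function sketched in Section 4.1; `constraints` is a
# list of callables c_j with c_j(x) <= 0 meaning "satisfied".
def penalty(x, constraints):
    """Cumulative violation of the constraints for a proposed query x."""
    return sum(max(c(x), 0.0) for c in constraints)

def overall_loss(xs, ys, y_star, constraints, lam=1.0, gamma=0.98):
    """Overall loss L = L_R + lambda * L_D for one query trajectory."""
    l_r = regret_loss(ys, y_star, gamma)              # Eq. 6
    l_d = sum(penalty(x, constraints) for x in xs)    # Eq. 9
    return l_r + lam * l_d                            # Eq. 10
```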

4.2.1 Example of penalty function.

Consider simple limit constraints on the input parameters such that the domain of the function is given by x_min ≤ x ≤ x_max; then we have:

    p(x) = Σ_{j=1}^{d} [ ReLU(x^j − x_max^j) + ReLU(x_min^j − x^j) ]    (11)

where x^j denotes the j-th dimension of x, and x_min^j and x_max^j are the j-th elements of x_min and x_max, respectively.
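For illustration, a sketch of this limit-constraint penalty (Eq. 11) in the same style as the earlier snippets:

```python
# Sketch of the limit-constraint penalty of Eq. 11: each dimension contributes
# the amount by which the query falls outside [x_min, x_max].
import numpy as np

def limit_penalty(x, x_min, x_max):
    """Penalty for a query x under per-dimension limit constraints."""
    x, x_min, x_max = np.asarray(x), np.asarray(x_min), np.asarray(x_max)
    over = np.maximum(x - x_max, 0.0)    # violation above the upper limit
    under = np.maximum(x_min - x, 0.0)   # violation below the lower limit
    return float(np.sum(over + under))
```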

5 Experimental Evaluation

We conduct experiments to evaluate the following: i. regret loss (L_R) versus OI loss (L_OI), ii. effect of including incremental normalization during training, and iii. ability of RNN-Opt trained with domain constraints using the loss in Eq. 10 to generate more feasible queries and leverage feedback to quickly adapt in case it proposes queries violating domain constraints.

For the unconstrained setting, we test RNN-Opt on i) standard benchmark functions for d = 2 and d = 6, and ii) 1280 synthetically generated GMM-DF functions (refer Appendix 0.A) not seen during training. We choose benchmark functions such as Goldstein, Rosenbrock, and Rastrigin (and the simple spherical function), which are known to be challenging for standard optimization methods. None of these functions were used for training any of the optimizers.

We use the regret r_t = y* − max_{1≤i≤t} y_i to measure the performance of any optimizer after t iterations, i.e. after proposing t queries. Lower values of r_t indicate superior optimizer performance. We test all the optimizers under a limited-budget setting, i.e. with a small number T of allowed queries. For each test function, the first query is randomly sampled from a uniform distribution over the domain, and we report average regret over 1280 random initializations. For synthetically generated GMM-DF functions, we report average regret over 1280 functions with one random initialization for each.
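As a concrete reading of this metric, the following sketch computes the regret curve reported in the plots below, with y_star taken as the known (or estimated) optimum of the test function:

```python
# Sketch: regret after each of t = 1..T queries, given responses ys and the
# known (or estimated) optimum y_star of the test function.
def regret_curve(ys, y_star):
    """Return r_t = y_star - max_{i<=t} y_i for every step t."""
    best, curve = float("-inf"), []
    for y in ys:
        best = max(best, y)
        curve.append(y_star - best)
    return curve
```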

All RNN-based optimizers (refer Table 1) were trained for 8000 iterations using the Adam optimizer [16] with an initial learning rate of 0.005. The network consists of two hidden layers, with the number of LSTM units in each layer chosen using a hold-out validation set of GMM-DF functions. Another set of 1280 randomly generated functions constitutes the GMM-DF test set. An initial code base (https://github.com/lightingghost/chemopt) developed using Tensorflow [1] was adapted to implement our algorithm. We used a batch size of 128, i.e. 128 randomly sampled functions (refer Equation 12) are processed in one mini-batch for updating the parameters of the LSTM.

Method         | Loss (discount γ) | Inc. Norm. (Training) | Inc. Norm. (Inference) | DC (Training) | DC (Inference)
RNN-OI         | L_OI (γ = 1.0)    | N                     | Y                      | N             | N
RNN-Opt-Basic  | L_R (γ = 0.98)    | N                     | Y                      | N             | N
RNN-Opt        | L_R (γ = 0.98)    | Y                     | Y                      | N             | N
RNN-Opt-P      | L_R (γ = 0.98)    | Y                     | Y                      | N             | Y
RNN-Opt-DC     | L_R (γ = 0.98)    | Y                     | Y                      | Y             | Y
Table 1: Variants of trained optimizers considered. Each row corresponds to a method. Y/N denotes whether a feature (incremental normalization or domain constraints, DC) was used (Y) or not (N) during training or inference for a particular method.

5.1 Observations

We make the following key observations for unconstrained optimization setting:

1. RNN-Opt is able to optimize black-box functions not seen during training, and hence, generalizes. We compare RNN-Opt with RNN-OI and two standard black-box optimization algorithms, CMA-ES [8] and Nelder-Mead [20]. RNN-OI uses x_t, y_t, and h_t to obtain the next hidden state h_{t+1}, which is then used to obtain x_{t+1} (as in Eq. 4), and is trained with the OI loss as given in Eq. 7. From Fig. 4 (a)-(i), we observe that RNN-Opt outperforms all the baselines considered on most functions, while being at least as good as the baselines in the few remaining cases. Except for the simple convex spherical function, RNN-based optimizers outperform CMA-ES and Nelder-Mead under a limited budget for both d = 2 and d = 6. We observe that the trained optimizers outperform CMA-ES and Nelder-Mead especially for the higher-dimensional case (d = 6 here), as also observed in [5, 29].

Figure 4: (a)-(i) RNN-Opt versus CMA-ES, Nelder-Mead, and RNN-OI on GMM-DF, Goldstein, Rastrigin, Rosenbrock, and Spherical functions for d = 2 and d = 6. (j)-(k) Regret loss versus OI loss on GMM-DF for d = 2 and d = 6, with the discount factor mentioned in brackets in the legend. (Lower regret is better.)

2. Regret-based loss is better than the OI loss. We compare RNN-Opt-Basic with RNN-OI (refer Table 1), where RNN-Opt-Basic differs from RNN-OI only in the loss function (and the discount factor, as discussed in the next point). For a fair comparison with RNN-OI, RNN-Opt-Basic does not include incremental normalization during training. From Fig. 4 (j)-(k), we observe that RNN-Opt-Basic (with γ = 0.98) performs better than RNN-OI during initial steps for d = 2 (while being comparable eventually) and across all steps for d = 6, proving the advantage of using the regret loss over the OI loss.

3. Significance of the discount factor when using regret-based loss versus OI loss. From Fig. 4 (j)-(k), we also observe that the results of RNN-Opt and RNN-OI are sensitive to the discount factor γ (refer Eqs. 6 and 7). γ = 0.98 works better for RNN-Opt, while γ = 1.0 (i.e. no discount) works better for RNN-OI. This can be explained as follows: the queries proposed initially (small t) are expected to be far from x* due to random initialization, and therefore have high initial regret. Hence, components of the loss term for smaller t should be given lower weightage in the regret-based loss. On the other hand, during later steps (close to T), we would like the regret to be as low as possible, and hence higher importance should be given to the corresponding terms in the regret-based loss. In contrast, RNN-OI is trained to keep improving irrespective of t, and hence giving equal importance to the contribution of each step to the OI loss works best.

4. Incremental normalization during training and inference to optimize functions with diverse ranges of values. We compare RNN-Opt-Basic and RNN-Opt, where RNN-Opt uses incremental normalization of inputs during training as well as testing (as described in Section 4.1.1), while RNN-Opt-Basic uses incremental normalization only during testing (refer Table 1). From Fig. 5, we observe that RNN-Opt performs significantly better than RNN-Opt-Basic, proving the advantage of incorporating incremental normalization during training. Note that since most of the functions considered have a large range of values, incremental normalization is enabled by default for all RNN-based optimizers during testing to obtain meaningful results, as illustrated earlier in Fig. 2, especially for functions with a large range, e.g. Rosenbrock.

Figure 5: Regret plots showing the effect of incremental normalization in RNN-Opt on GMM-DF and Rosenbrock for d = 2 and d = 6. Similar results are observed for all functions; we omit them here for brevity.

5.2 RNN-Opt with Domain Constraints

To train RNN-Opt-DC, we generate synthetic functions with random limit constraints as explained in Section 4.2.1. The limits [x_min, x_max] of the search space are set per dimension, with the components of x_min and x_max sampled randomly for every training function.

We train RNN-Opt-DC with a fixed value of λ. As a baseline, we use RNN-Opt with a minor variation at inference time (with no change in the training procedure) where, instead of passing only the normalized response ỹ_t as input to the RNN, we pass a penalty-adjusted response so as to capture the penalty feedback. We call this baseline approach RNN-Opt-P (refer Table 1). While RNN-Opt-DC is explicitly trained to minimize the penalty, RNN-Opt-P captures the requirement of trying to maximize f under a soft constraint of minimizing p(x) only at inference time.

We use the standard quadratic (disk) constraint commonly used to evaluate constrained optimization approaches, i.e. ||x||² ≤ c for a fixed c, for the Rosenbrock function. For GMM-DF, we generate random limit constraints on each dimension around the global optima, such that the optimal solution is still the same as the one without constraints, while the feasible search space varies randomly across functions. The limits of the domain are [x_min, x_max], where the components of x_min and x_max are sampled randomly around the corresponding components of the global optimum. We also consider two instances of an (anonymized) non-linear surrogate model of a real-world industrial process, built by subject-matter experts, with six controllable input parameters (d = 6) as black-box functions, referred to as Industrial-1 and Industrial-2 in Fig. 6. This process imposes limit constraints on all six parameters guided by domain knowledge. The ground-truth optimal value for these functions was obtained by querying the surrogate model approximately 200k times via grid search. The regret results are averaged over runs assuming diverse environmental conditions.

Figure 6: Regret plots comparing RNN-Opt-DC (DC) and RNN-Opt-P (P) on GMM-DF (d = 2, 6), Rosenbrock (d = 2, 6), Industrial-1 (d = 6), and Industrial-2 (d = 6). The entries in brackets denote the constraint-generation parameters for GMM-DF and the disk-constraint radius for Rosenbrock.

RNN-Opt-DC and RNN-Opt-P are not guaranteed to propose feasible queries at all steps because of the soft constraints during training and/or inference. Therefore, despite training the optimizers for T steps, we unroll the RNNs up to a larger maximum number of steps and take the first T proposed queries that are feasible, i.e. that satisfy the domain constraints. For functions where an optimizer is not able to propose T feasible queries within this limit, we replicate the regret corresponding to the best solution found for the remaining steps. As shown in Fig. 6, we observe that RNN-Opt with domain constraints, namely RNN-Opt-DC, is able to effectively use the explicit penalty feedback, and is at least as good as RNN-Opt-P in all cases. As expected, we also observe that the performance of both optimizers degrades as the constraint ranges (or the disk radius) increase, since the search space to be explored by the optimizer grows.

6 Conclusion and Future Work

Learning optimization algorithms under the meta-learning paradigm is an area of active research. In this work, we have shown that using regret directly as a loss for training optimizers based on recurrent neural networks is possible, and that it yields better optimizers than those obtained using an observed-improvement based loss. We have proposed extensions of practical importance to black-box optimization algorithms that allow dealing with a diverse range of function values and handling domain constraints more effectively. One shortcoming of this approach is that a different optimizer needs to be trained for each number of input parameters. In the future, we plan to extend this work to train optimizers that can ingest inputs with a varying and high number of parameters, e.g. by first proposing a change in a latent space and then estimating changes in the actual input space, as in [22, 27]. Further, training optimizers for multi-objective optimization can be a useful extension.

Appendix 0.A Generating Diverse Non-Convex Synthetic Functions

We generate synthetic non-convex continuous functions f defined over Θ ⊆ ℝ^d via a Gaussian Mixture Model density function (GMM-DF, similar to [29]):

    f(x) = Σ_{k=1}^{K} π_k N(x; μ_k, Σ_k)    (12)

where N(x; μ_k, Σ_k) denotes a multivariate Gaussian density with mean μ_k and covariance Σ_k, and π_k are the mixture weights.

In this work, we used GMM-DF instead of the Gaussian Processes used in [5] for ease of implementation and faster response time to queries.

Figure 7: Sample synthetic GMM density functions for d = 2.

Functions obtained in this manner are often non-convex and have multiple local minima/maxima. Sample plots for functions obtained over a 2-D input space are shown in Fig. 7. In our experiments, the number of mixture components and the ranges from which the component means and variances are sampled are fixed, chosen separately for d = 2 and d = 6 (all covariance matrices are diagonal).
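A sketch of how such GMM-DF training functions could be sampled, assuming the mixture-density form of Eq. 12 with diagonal covariances; the component count and sampling ranges below are illustrative placeholders, not the values used in the paper:

```python
# Sketch: sample a random GMM-density training function (Eq. 12) with diagonal
# covariances. K, mean_range, and var_range are illustrative placeholders.
import numpy as np

def sample_gmm_df(d, K=3, mean_range=(-1.0, 1.0), var_range=(0.1, 0.5), seed=None):
    """Return a callable f(x) that evaluates a random GMM density over R^d."""
    rng = np.random.default_rng(seed)
    weights = rng.dirichlet(np.ones(K))                 # mixing weights pi_k
    means = rng.uniform(*mean_range, size=(K, d))       # component means mu_k
    variances = rng.uniform(*var_range, size=(K, d))    # diagonal covariances

    def f(x):
        x = np.asarray(x)
        # Sum of weighted diagonal-Gaussian densities evaluated at x.
        norm = (2 * np.pi) ** (d / 2) * np.sqrt(np.prod(variances, axis=1))
        exponent = -0.5 * np.sum((x - means) ** 2 / variances, axis=1)
        return float(np.sum(weights * np.exp(exponent) / norm))

    return f
```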

For any function f, we use an estimated optimal value ŷ* = max_k f(μ_k) instead of y*. This assumes that the global maximum of the function is at the mean of one of the Gaussian components. We validate this assumption by obtaining better estimates of the ground truth for y* via grid search over 0.2M randomly sampled query points over the domain of f. For 10k randomly sampled GMM-DF functions, we obtained an average error of 0.03 with a standard deviation of 0.02 in estimating y*, suggesting that the assumption is reasonable; in practice, approximate values of y* suffice to estimate the regret values for supervision. However, in general, y* can also be obtained using gradient descent on f.

References