Probabilistic Matrix Factorization for Automated Machine Learning

05/15/2017 · Nicolo Fusi et al.

In order to achieve state-of-the-art performance, modern machine learning techniques require careful data pre-processing and hyperparameter tuning. Moreover, given the ever increasing number of machine learning models being developed, model selection is becoming increasingly important. Automating the selection and tuning of machine learning pipelines consisting of data pre-processing methods and machine learning models, has long been one of the goals of the machine learning community. In this paper, we tackle this meta-learning task by combining ideas from collaborative filtering and Bayesian optimization. Using probabilistic matrix factorization techniques and acquisition functions from Bayesian optimization, we exploit experiments performed in hundreds of different datasets to guide the exploration of the space of possible pipelines. In our experiments, we show that our approach quickly identifies high-performing pipelines across a wide range of datasets, significantly outperforming the current state-of-the-art.







1 Introduction

Machine learning models often depend on hyperparameters that require extensive fine-tuning in order to achieve optimal performance. For example, state-of-the-art deep neural networks have highly tuned architectures and require careful initialization of the weights and learning algorithm (for example, by setting the initial learning rate and various decay parameters). These hyperparameters can be learned by cross-validation (or holdout set performance) over a grid of values, or by randomly sampling the hyperparameter space (Bergstra & Bengio, 2012); but these approaches do not take advantage of any continuity in the parameter space. More recently, Bayesian optimization has emerged as a promising alternative (Srinivas et al., 2009; Hutter et al., 2011; Osborne et al., 2009; Bergstra et al., 2011; Snoek et al., 2012; Bergstra et al., 2013). In Bayesian optimization, the loss (e.g. root mean square error) is modeled as a function of the hyperparameters. A regression model (usually a Gaussian process) and an acquisition function are then used to iteratively decide which hyperparameter setting should be evaluated next. More formally, the goal of Bayesian optimization is to find the vector of hyperparameters θ* that corresponds to

θ* = argmin_θ L(f(X; θ), y),

Figure 1: Two-dimensional embedding of 5,000 ML pipelines across 576 OpenML datasets. Each point corresponds to a pipeline and is colored by the AUROC obtained by that pipeline in one of the OpenML datasets (OpenML dataset id 943).

where ŷ = f(X; θ) are the predictions generated by a machine learning model f (e.g. SVM, random forest, etc.) with hyperparameters θ on some inputs X; y are the targets/labels and L is a loss function. Usually, the hyperparameters are assumed to lie in a continuous subset of R^H, although in practice many hyperparameters can be discrete (e.g. the number of layers in a neural network) or categorical (e.g. the loss function to use in a gradient boosted regression tree).

Bayesian optimization techniques have been shown to be very effective in practice and sometimes identify better hyperparameters than human experts, leading to state-of-the-art performance in computer vision tasks (Snoek et al., 2012). One drawback of these techniques is that they are known to suffer in high-dimensional hyperparameter spaces, where they often perform comparably to random search (Li et al., 2016b). This limitation has been shown in practice (Li et al., 2016b) as well as studied theoretically (Srinivas et al., 2009; Grünewälder et al., 2010), and is due to the necessity of sampling enough hyperparameter configurations to get a good estimate of the predictive posterior over a high-dimensional space. In practice, this is not an insurmountable obstacle to the fine-tuning of a handful of parameters in a single model, but it is becoming increasingly impractical as the focus of the community shifts from tuning individual hyperparameters to identifying entire ML pipelines consisting of data pre-processing methods, machine learning models and their parameters (Feurer et al., 2015).

Our goal in this paper is not only to tune the hyperparameters of a given model, but also to identify which model to use and how to pre-process the data. We do so by leveraging experiments already performed across different datasets to solve the optimization problem

argmin_{m, θ_m, p, θ_p} L( m(p(X; θ_p); θ_m), y ),

where m is the ML model with hyperparameters θ_m and p is the pre-processing method with hyperparameters θ_p. In the rest of the paper, we refer to the combination of pre-processing method, machine learning model and their hyperparameters as a machine learning pipeline. Some of the dimensions in ML pipeline space are continuous, some are discrete, some are categorical (e.g. the “model” dimension can be a choice between a random forest or an SVM), and some are conditioned on another dimension (e.g. the “number of trees” dimension is only relevant when a random forest is chosen). The mixture of discrete, continuous and conditional dimensions in ML pipelines makes modeling continuity in this space particularly challenging. For this reason, unlike previous work, we consider “instantiations” of pipelines, meaning that we fix the set of pipelines ahead of training. For example, an instantiated pipeline can consist of computing the top 5 principal components of the input data and then applying a random forest with 1000 trees. Extensive experiments in section 4 demonstrate that this discretization of the space actually leads to better performance than models that attempt to model continuity.
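As an illustration, a fixed set of instantiated pipelines can be generated by enumerating discrete settings of each component. The components and grids below are hypothetical placeholders, not the paper's actual search space:

```python
from itertools import product

# Hypothetical, coarse grids -- the paper samples a much larger space.
preprocessors = [("pca", {"n_components": k}) for k in (5, 10, 20)]
models = (
    [("random_forest", {"n_estimators": n}) for n in (100, 1000)]
    + [("svm", {"C": c}) for c in (0.1, 1.0, 10.0)]
)

# Each instantiated pipeline fixes every choice ahead of training, so the
# search space becomes a finite list rather than a mixed
# continuous/discrete/conditional space.
pipelines = [
    {"preprocessor": p, "model": m} for p, m in product(preprocessors, models)
]
print(len(pipelines))
```

Note how conditioning is handled for free: a pipeline whose model is an SVM simply never carries a "number of trees" entry.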

We show that the problem of predicting the performance of ML pipelines on a new dataset can be cast as a collaborative filtering problem that can be solved with probabilistic matrix factorization techniques. The approach we follow in the rest of this paper, based on Gaussian process latent variable models (Lawrence & Urtasun, 2009; Lawrence, 2005), embeds different pipelines in a latent space based on their performance across different datasets. For example, Figure 1 shows the first two dimensions of the latent space of ML pipelines identified by our model on OpenML (Vanschoren et al., 2013) datasets. Each dot corresponds to an ML pipeline and is colored depending on the AUROC achieved on a holdout set for a given OpenML dataset. Since our probabilistic approach produces a full predictive posterior distribution over the performance of the ML pipelines considered, we can use it in conjunction with acquisition functions commonly used in Bayesian optimization to guide the exploration of the ML pipeline space. Through extensive experiments, we show that our method significantly outperforms the current state-of-the-art in automated machine learning in the vast majority of datasets we considered.

2 Related work

The concept of leveraging experiments performed in previous problem instances has been explored in different ways by two different communities. In the Bayesian optimization community, most of the work revolves around either casting this problem as an instance of multi-task learning or by selecting the first parameter settings to evaluate on a new dataset by looking at what worked in related datasets (we will refer to this as meta-learning for cold-start). In the multi-task setting, Swersky et al. (2013) have proposed a multi-task Bayesian optimization approach leveraging multiple related datasets in order to find the best hyperparameter setting for a new task. For instance, they suggested using a smaller dataset to tune the hyperparameters of a bigger dataset that is more expensive to evaluate. Schilling et al. (2015) also treat this problem as an instance of multi-task learning, but instead of treating each dataset as a separate task (or output), they effectively consider the tasks as conditionally independent given an indicator variable specifying which dataset was used to run a given experiment. Springenberg et al. (2016) do something similar with Bayesian neural networks, but instead of passing an indicator variable, their approach learns a dataset-specific embedding vector. Perrone et al. (2017) also effectively learn a task-specific embedding, but instead of using Bayesian neural networks end-to-end like in (Springenberg et al., 2016), they use feed-forward neural networks to learn the basis functions of a Bayesian linear regression model.

Other approaches address the cold-start problem by evaluating parameter settings that worked well in previous datasets. The most successful attempt to do so for automated machine learning problems (i.e. in very high-dimensional and structured parameter spaces) is the work by Feurer et al. (2015). In their paper, the authors compute meta-features of both the dataset under examination as well as a variety of OpenML (Vanschoren et al., 2013) datasets. These meta-features include for example the number of classes or the number of samples in each dataset. They measure similarity between datasets by computing the L1 norm of the meta-features and use the optimization runs from the nearest datasets to warm-start the optimization. Reif et al. (2012) also use meta-features of the dataset to cold-start the optimization performed by a genetic algorithm.

Wistuba et al. (2015) focus on hyperparameter optimization and extend the approach presented in (Feurer et al., 2015) by also taking into account the performance of hyperparameter configurations evaluated on the new dataset. In the same paper, they also propose to carefully pick these evaluations such that the similarity between datasets is more accurately represented, although they found that this doesn’t result in improved performance in their experiments.

Other related work has been produced in the context of algorithm selection for satisfiability problems. In particular, Stern et al. (2010) tackled constraint solving problems and combinatorial auction winner determination problems using a latent variable model to select which algorithm to use. Their model performs a joint linear embedding of problem instances and experts (e.g. different SAT solvers) based on their meta-features and a sparse matrix containing the results of previous algorithm runs. Malitsky & O’Sullivan (2014) also proposed to learn a latent variable model by decomposing the matrix containing the performance of each solver on each problem. They then develop a model to project commonly used hand-crafted meta-features used to select algorithms onto the latent space identified by their model. They use this last model to do one-shot (i.e. non-iterative) algorithm selection. This is similar to what was done by Mısır & Sebag (2017), but they do not use the second regression model and instead perform one-shot algorithm selection directly.

Our work is most related to (Feurer et al., 2015) in terms of scope (i.e. joint automated pre-processing, model selection and hyperparameter tuning), but we discretize the space and set up a multi-task model, while they capture continuity in parameter space in a single-task model with a smart initialization. Our approach is also loosely related to the work of Stern et al. (2010), but we perform sequential model based optimization with a non-linear mapping between latent and observed space in an unsupervised model, while they use a supervised linear model trained on ranks for one-shot algorithm selection. The application domain of their model also required a different utility function and a time-based feedback model.

3 AutoML as probabilistic matrix factorization

In this paper, we develop a method that can draw information from all of the datasets for which experiments are available, whether they are immediately related (e.g. a smaller version of the current dataset) or not. The idea behind our approach is that if two datasets have similar (i.e. correlated) results for a few pipelines, it’s likely that the remaining pipelines will produce results that are similar as well. This is somewhat reminiscent of a collaborative filtering problem for movie recommendation, where if two users liked the same movies in the past, it’s more likely that they will like similar ones in the future.

More formally, given N machine learning pipelines and D datasets, we train each pipeline on part of each dataset and evaluate it on a holdout set. This gives us an N × D matrix Y summarizing the performance of each pipeline in each dataset. In the rest of the paper, we will assume that Y is a matrix of balanced accuracies (see e.g., (Guyon et al., 2015)), and that we want to maximize the balanced accuracy for a new dataset; but our approach can be used with any loss function (e.g. RMSE, balanced error rate, etc.). Having observed the performance of different pipelines on different datasets, the task of predicting the performance of any of them on a new dataset can be cast as a matrix factorization problem.

Specifically, we are seeking a low-rank decomposition such that Y ≈ XW, where X ∈ R^{N×Q}, W ∈ R^{Q×D}, and Q is the dimensionality of the latent space. As done in Lawrence & Urtasun (2009) and Salakhutdinov & Mnih (2008), we consider the probabilistic version of this task, known as probabilistic matrix factorization,

p(Y | X, W, σ²) = ∏_{n=1}^{N} N(y_n | x_n W, σ² I),   (1)

where x_n is a row of the latent variables X and y_n is a vector of measured performances for pipeline n. In this setting both X and W are unknown and must be inferred.
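A minimal numerical sketch of the generative model in (1); all sizes and variable names below are chosen for illustration, and both X and W are fixed here only to show the likelihood computation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, Q = 30, 8, 3           # pipelines, datasets, latent dimensions
X = rng.normal(size=(N, Q))  # latent variables (one row x_n per pipeline)
W = rng.normal(size=(Q, D))  # weights mapping latent space to datasets
sigma = 0.1                  # observation noise standard deviation

# Under the model, Y is a noisy observation of the low-rank product XW.
Y = X @ W + sigma * rng.normal(size=(N, D))

# Gaussian log-likelihood of Y given X, W and the noise variance:
# independent normals per entry, as in equation (1).
resid = Y - X @ W
loglik = -0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))
```

In the paper, inference runs the other way: only (a sparse subset of) Y is observed, and X, W and σ² are learned.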

3.1 Non-linear matrix factorization with Gaussian process priors

The probabilistic matrix factorization approach just introduced assumes that the entries of Y are linearly related to the latent variables. In nonlinear probabilistic matrix factorization (Lawrence & Urtasun, 2009), the elements of Y are given by a nonlinear function of the latent variables, y_{n,d} = f_d(x_n) + ε, where ε is independent Gaussian noise with variance σ². This gives a likelihood of the form

p(Y | X, f, σ²) = ∏_{n=1}^{N} ∏_{d=1}^{D} N(y_{n,d} | f_d(x_n), σ²).   (2)

Following Lawrence & Urtasun (2009), we place a Gaussian process prior over f, so that any vector of function values is governed by a joint Gaussian density with covariance matrix K, whose elements K_{i,j} = k(x_i, x_j) encode the degree of correlation between two samples as a function of the latent variables. If we use the covariance function k(x, x') = xᵀx', which is a prior corresponding to linear functions, we recover a model equivalent to (1). Alternatively, we can choose a prior over non-linear functions, such as a squared exponential covariance function with automatic relevance determination (ARD, one length-scale per dimension),

k(x, x') = σ_f² exp( −½ Σ_{q=1}^{Q} (x_q − x'_q)² / ℓ_q² ),   (3)

where σ_f² is a variance (or amplitude) parameter and ℓ_1, …, ℓ_Q are length-scales. The squared exponential covariance function is infinitely differentiable and hence is a prior over very smooth functions. In practice, such a strong smoothness assumption can be unrealistic and is the reason why the Matérn class of kernels is sometimes preferred (Williams & Rasmussen, 2006). In the rest of this paper we use the squared exponential kernel and leave the investigation of the performance of Matérn kernels to future work.
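The ARD kernel in (3) can be written in a few lines of NumPy; the function name and defaults below are mine:

```python
import numpy as np

def se_ard(X1, X2, amplitude=1.0, lengthscales=None):
    """Squared exponential kernel with one length-scale per latent dimension."""
    if lengthscales is None:
        lengthscales = np.ones(X1.shape[1])
    A = X1 / lengthscales   # rescale each latent dimension by its length-scale
    B = X2 / lengthscales
    sqdist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return amplitude * np.exp(-0.5 * sqdist)

Xlat = np.random.default_rng(1).normal(size=(5, 2))
K = se_ard(Xlat, Xlat, amplitude=2.0, lengthscales=np.array([1.0, 0.5]))
```

A large length-scale ℓ_q effectively switches dimension q off, which is the "relevance determination" part of ARD.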

After specifying a GP prior, the marginal likelihood is obtained by integrating out the function f under the prior

p(Y | X, θ) = ∏_{d=1}^{D} N(y_d | 0, K + σ² I),   (4)

where θ = {σ_f, ℓ_1, …, ℓ_Q, σ} collects the kernel parameters and the noise standard deviation, and K is the N × N covariance matrix with entries K_{i,j} = k(x_i, x_j).

In principle, we could add metadata about the pipelines and/or the datasets by adding additional kernels. As we discuss in section 4 and show in Figures 2 and 3, we didn’t find this to help in practice, since the latent variable model is able to capture all the necessary information even in the fully unsupervised setting.

Figure 2: Latent embeddings of 42,000 machine learning pipelines colored according to which model was included in each pipeline. These are paired plots of the first 5 dimensions of our 20-dimensional latent space. The latent space effectively captures structure in the space of models.

3.2 Inference with missing data

Running multiple pipelines on multiple datasets is an embarrassingly parallel operation, and our proposed method readily takes advantage of these kinds of computationally cheap observations. However, in applications where it is expensive to gather such observations, Y will be a sparse matrix, and it becomes necessary to be able to perform inference with missing data. Given that the marginal likelihood in (4) follows a multivariate Gaussian distribution, marginalizing over missing values is straightforward and simply requires “dropping” the missing observations from the mean and covariance. More formally, we define an indexing function e(d) that, given a dataset index d, returns the list of pipelines that have been evaluated on d. We can then rewrite (4) as

p(Y | X, θ) = ∏_{d=1}^{D} N(y_{e(d),d} | 0, K_{e(d)} + σ² I),   (5)

where K_{e(d)} is the covariance matrix evaluated at the latent points of the pipelines indexed by e(d).

As done in Lawrence & Urtasun (2009), we infer the parameters θ and latent variables X by minimizing the negative log-likelihood using stochastic gradient descent. We do so by presenting the observed entries y_{e(d),d} one dataset at a time and updating θ and x_{e(d)} for each dataset d. The negative log-likelihood of the model can be written as

L = Σ_{d=1}^{D} [ (N_d / 2) log 2π + ½ log |K_{e(d)} + σ² I| + ½ y_{e(d),d}ᵀ (K_{e(d)} + σ² I)⁻¹ y_{e(d),d} ],   (6)

where N_d is the number of pipelines evaluated for dataset d. For every dataset d we update the global parameters θ as well as the latent variables x_{e(d)} by evaluating at the t-th iteration:

θ^{t+1} = θ^t − η ∂L_d/∂θ,    x_{e(d)}^{t+1} = x_{e(d)}^t − η ∂L_d/∂x_{e(d)},   (7)

where η is the learning rate.
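The per-dataset term of the objective (6) can be evaluated stably with a Cholesky factorization. This sketch only computes the objective; in practice the gradients in (7) would be obtained by automatic differentiation. Names are illustrative:

```python
import numpy as np

def dataset_nll(y_obs, K_obs, noise_var):
    """Negative log marginal likelihood of one dataset's observed performances."""
    n = len(y_obs)
    C = K_obs + noise_var * np.eye(n)       # kernel plus observation noise
    L = np.linalg.cholesky(C)               # stable log-determinant and solve
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))  # C^{-1} y
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return 0.5 * (n * np.log(2 * np.pi) + logdet + y_obs @ alpha)

# Sanity check: with K = 0 and unit noise, this is the standard normal NLL.
y = np.array([0.5, -0.2, 0.1])
nll = dataset_nll(y, np.zeros((3, 3)), 1.0)
```

Only the rows and columns of K for pipelines in e(d) are passed in, which is exactly the "dropping" of missing observations described above.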

3.3 Predictions

Predictions from the model can be easily computed by following the standard derivations for Gaussian process regression (Williams & Rasmussen, 2006). The predicted performance y*_{j,d} of pipeline j for a new dataset d is given by

p(y*_{j,d} | y_{e(d),d}) = N(m, v),   (8)

m = k*ᵀ (K_{e(d)} + σ² I)⁻¹ y_{e(d),d},    v = k** − k*ᵀ (K_{e(d)} + σ² I)⁻¹ k*,   (9)

where k* is the vector of covariances between the latent point x_j and the latent points of the pipelines in e(d), and k** = k(x_j, x_j).

The computational complexity for generating these predictions is largely determined by the number M of pipelines already evaluated for a test dataset and is due to the inversion of an M × M matrix. This is not particularly onerous because the typical number of evaluations is likely to be in the hundreds, given the cost of training each pipeline and the risk of overfitting to the validation set if too many pipelines are evaluated.
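The predictive equations (8)-(9) in NumPy, with a simple unit-length-scale RBF kernel standing in for the learned one (all names and data are mine):

```python
import numpy as np

def rbf(A, B):
    # Squared exponential kernel with unit amplitude and length-scale.
    sqdist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sqdist)

def predict(x_new, X_obs, y_obs, noise_var):
    """Posterior mean and variance of one pipeline's performance."""
    C = rbf(X_obs, X_obs) + noise_var * np.eye(len(X_obs))
    k_star = rbf(X_obs, x_new[None, :])[:, 0]
    k_ss = 1.0                      # rbf(x, x) = 1 for this kernel
    solve = np.linalg.solve(C, k_star)
    mean = solve @ y_obs            # eq. (8)
    var = k_ss - solve @ k_star     # eq. (9)
    return mean, var

rng = np.random.default_rng(2)
X_obs = 2.0 * rng.normal(size=(5, 2))  # latent points of evaluated pipelines
y_obs = np.sin(X_obs[:, 0])            # their (synthetic) performances
m, v = predict(X_obs[0], X_obs, y_obs, noise_var=1e-6)
```

Predicting at an already-evaluated latent point recovers its observation with near-zero variance, as expected from a GP posterior.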

3.4 Acquisition functions

The model described so far can be used to predict the expected performance of each ML pipeline as a function of the pipelines already evaluated, but it does not yet give any guidance as to which pipeline should be tried next. A simple approach is to iteratively pick the pipeline with the maximum predicted performance, argmax_j m_j, but such a utility function, also known as an acquisition function, would discard information about the uncertainty of the predictions. One of the most widely used acquisition functions is expected improvement (EI) (Močkus, 1975), which is given by the expectation of the improvement function

I(y*_{j,d}) = max(y*_{j,d} − y_best, 0),

where y_best is the best result observed. Since y*_{j,d} is Gaussian distributed (see (9)), this expectation can be computed analytically,

EI(j) = √v_j [ γ_j Φ(γ_j) + φ(γ_j) ],

where Φ is the cumulative distribution function of the standard normal, φ is its density, and γ_j is defined as

γ_j = (m_j − y_best − ξ) / √v_j,

where ξ is a free parameter to encourage exploration. After computing the expected improvement for each pipeline, the next pipeline to evaluate is simply given by

j* = argmax_j EI(j).

The expected improvement is just one of many possible acquisition functions, and different problems may require different acquisition functions. See (Shahriari et al., 2016) for a review.

Figure 3: Latent embedding of all the pipelines in which PCA is included as a pre-processor. Each point is colored according to the percentage of variance retained by PCA (i.e. the hyperparameter of interest when tuning PCA in ML pipelines).

4 Experiments

In this section, we compare our method to a series of baselines as well as to auto-sklearn (Feurer et al., 2015), the current state-of-the-art approach and overall winner of the ChaLearn AutoML competition (Guyon et al., 2016). We ran all of the experiments on 553 OpenML (Vanschoren et al., 2013) datasets, selected by filtering for binary and multi-class classification problems with a bounded number of samples and no missing values, although our method is capable of handling datasets which cause ML pipeline runs to be unsuccessful (described below).

4.1 Generation of training data

We generated training data for our method by splitting each OpenML dataset into 80% training data, 10% validation data and 10% test data, running ML pipelines on each dataset, and measuring the balanced accuracy (i.e. accuracy rescaled such that random performance is 0 and perfect performance is 1.0).

We generated the pipelines by sampling a combination of pre-processors, machine learning models, and their corresponding hyperparameters from the entries in Supplementary Table 1. All the models and pre-processing methods we considered were implemented in scikit-learn (Pedregosa et al., 2011). We sampled the parameter space using functions provided in the auto-sklearn library (Feurer et al., 2015). Similar to what was done in (Feurer et al., 2015), we limited the maximum training time of each individual model within a pipeline to 30 seconds and its memory consumption to 16GB. Because of network failures and the cluster occasionally running out of memory, the resulting matrix Y was not fully sampled and contained missing entries. As pointed out in the previous section, this is expected in realistic applications and is not a problem for our method, since it can easily handle sparse data.

Out of the 553 total datasets, we selected 100 of them as a held-out test set. We found that some of the OpenML datasets are so easy to model that most of the machine learning pipelines we tried worked equally well. Since this could swamp any difference between the different methods we were evaluating, we chose our test set taking into consideration the difficulty of each dataset. We did so by randomly drawing without replacement each dataset with probabilities proportional to how poorly random selection performed on it. Specifically, for each dataset, we ran random search for 300 iterations and recorded the regret. The probability of selecting a dataset was then proportional to the regret on that dataset, averaged over 100 trials of random selection. After removing OpenML datasets that were used to train auto-sklearn, the final size of the held-out test set was 89. The training set consisted of the remaining 464 datasets (the IDs of both training and test sets are provided in the supplementary material).
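The regret-weighted draw of the test set can be sketched as follows; the regret values and dataset count here are synthetic stand-ins for the ones measured with random search:

```python
import numpy as np

rng = np.random.default_rng(3)
n_datasets = 12
# Synthetic average regrets of random search on each dataset (placeholders
# for the values averaged over 100 trials in the text).
avg_regret = rng.uniform(0.01, 0.3, size=n_datasets)

# Harder datasets (higher regret) are proportionally more likely to be
# drawn into the held-out test set, without replacement.
probs = avg_regret / avg_regret.sum()
test_ids = rng.choice(n_datasets, size=4, replace=False, p=probs)
train_ids = np.setdiff1d(np.arange(n_datasets), test_ids)
```

This biases the evaluation toward datasets where pipeline choice actually matters, so method differences are not swamped by trivially easy datasets.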

4.2 Parameter settings

We set the number of latent dimensions to Q = 20, used stochastic gradient descent with a fixed learning rate, and set the (column) batch size to 50. The latent space was initialized using PCA, and training was run for 300 epochs (corresponding to approximately 3 hours on a 16-core Azure machine). Finally, we used the expected improvement acquisition function described in section 3.4.


Figure 4:

Average rank of all the approaches we considered as a function of the number of iterations. For each holdout dataset, the methods are ranked based on the balanced accuracy obtained on the validation set at each iteration. The ranks are then averaged across datasets. Lower is better. The shaded areas represent the standard error for each method.

4.3 Results

We compared the model described in this paper, PMF, to the following methods:

  • Random. For each test dataset, we performed a random search by sampling each pipeline to be evaluated from the set of 42,000 at random without replacement.

  • Random 2x. Same as above, but with twice the budget. This simulates parallel evaluation of pipelines and is a strong baseline (Li et al., 2016a).

  • Random 4x. Same as above, but with 4 times the budget.

  • auto-sklearn (Feurer et al., 2015). We ran auto-sklearn for 4 hours per dataset and set to optimize balanced accuracy on a holdout set. We disabled the automated ensembling of models in order to obtain a fair comparison to the other non-ensembling methods.

Our method uses the same procedure used in (Feurer et al., 2015) to “warm-start” the process by selecting the first 5 pipelines, after which the acquisition function selects subsequent pipelines.

Figure 4 shows the average rank for each method as a function of the number of iterations (i.e. the number of pipelines evaluated). Starting from the first iteration, our approach consistently achieves the best average rank. Auto-sklearn is the second best model, outperforming random 2x and almost matched by random 4x. Please note that random 2x and random 4x are only intended as baselines that are easy to understand and interpret, but that in no way can be considered practical solutions, since they both have a much larger computational budget than the non-baseline methods.

Figure 5: Difference between the maximum balanced accuracy observed on the test set and the balanced accuracy obtained by each method at each iteration. Lower is better. The shaded areas represent the standard error for each method.
Figure 8:

(a) Mean squared error (MSE) between predicted and observed balanced accuracies in the test set as a function of the number of iterations. Lower is better. MSE is averaged across all test datasets. (b) Posterior predictive variance as a function of the number of iterations and averaged across all test datasets. Shaded area shows two standard errors around the mean.

Rank plots such as Figure 4 are useful to understand the relative performance of a set of models, but they don’t give any information about the magnitude of the difference in performance. For this reason, we measured the difference between the maximum balanced accuracy obtained by any pipeline in each dataset and the one obtained by the pipeline selected at each iteration. The results summarized in Figure 5 show that our method still outperforms all the others. We also investigated how well our method performs when fewer observations/training datasets are available. In the first experiment, we ran our method in the setting where 90% of the entries in Y are missing. Supplementary Figures 1 and 2 demonstrate that our method degrades in performance only slightly, but still performs best amongst competitors. In the second experiment, we matched the number (and, for the most part, the identity) of datasets that auto-sklearn uses to initialize its Bayesian optimization procedure. The results, shown in Supplementary Figures 3 and 4, confirm that our model outperforms competing approaches even when trained on a subset of the data.

Next, we investigated how quickly our model is able to improve its predictions as more pipelines are evaluated. Figure 8a shows the mean squared error computed across the test datasets as a function of the number of evaluations. As expected, the error monotonically decreases and appears to asymptote after 200 iterations. Figure 8b shows the uncertainty of the model (specifically, the posterior variance) as a function of the number of evaluations. Overall, Figures 8a and 8b support that as more evaluations are performed, the model becomes less uncertain and the accuracy of the predictions increases.

Including pipeline metadata. Our approach can easily incorporate information about the composition and the hyperparameters of the pipelines considered. This metadata could for example include information about which model is used within each pipeline or which pre-processor is applied to the data before passing it to the model. Empirically, we found that including this information in our model didn’t improve performance (data not shown). Indeed, our model is able to effectively capture most of this information in a completely unsupervised fashion, just by observing the sparse pipelines-by-datasets matrix Y. This is visible in Figure 2, where we show the latent embedding colored according to which model was included in which pipeline. On a finer scale, the latent space can also capture different settings of an individual hyperparameter. This is shown in Figure 3, where each pipeline is embedded in a 2-dimensional space and colored by the value of the hyperparameter of interest, in this case the percentage of variance retained by a PCA preprocessor. Overall, our findings indicate that pipeline metadata is not needed by our model if enough experimental data (i.e. enough entries in matrix Y) is available.

5 Discussion

We have presented a new approach to automatically build predictive ML pipelines for a given dataset, automating the selection of data pre-processing method and machine learning model as well as the tuning of their hyperparameters. Our approach combines techniques from collaborative filtering and ideas from Bayesian optimization to intelligently explore the space of ML pipelines, exploiting experiments performed in previous datasets. We have benchmarked our approach against the state-of-the-art in 89 OpenML datasets with different sample sizes, number of features and number of classes. Overall, our results show that our approach outperforms both the state-of-the-art as well as a set of strong baselines.

One potential concern with our method is that it requires sampling (i.e. instantiating pipelines) from a potentially high-dimensional space and thus could require exponentially many samples in order to explore all areas of this space. We have found this not to be a problem for three reasons. First, many of the dimensions in the space of pipelines are conditioned on the choice of other dimensions. For example, the number of trees or depth of a random forest are parameters that are only relevant if a random forest is chosen in the “model” dimension. This reduces the effective search space significantly. Second, in our model we treat every pipeline as an additional sample, so increasing the sampling density also results in an increase in sample size (and similarly, adding a dataset also increases the effective sample size). Finally, very dense sampling of the pipeline space is only needed if the performance is very sensitive to small parameter changes, something that we haven’t observed in practice. If this is a concern, we advise using our approach in conjunction with traditional Bayesian optimization methods (such as (Snoek et al., 2012)) to further fine-tune the parameters.

We are currently investigating several extensions of this work. First, we would like to include dataset-specific information in our model. As discussed in section 3, the only data taken into account by our model is the performance of each method in each dataset. Similarity between different pipelines is induced by having correlated performance across multiple datasets, and ignores potentially relevant metadata about datasets, such as the sample size or number of classes. We are currently working on including such information by extending our model using additional kernels and dual embeddings (i.e. embedding both pipelines and datasets in separate latent spaces). Second, we are interested in using acquisition functions that include a factor representing the computational cost of running a given pipeline (Snoek et al., 2012) to handle instances when datasets have a large number of samples. The machine learning models we used for our experiments were constrained not to exceed a certain runtime, but this could be impractical in real applications. Finally, we are planning to experiment with different probabilistic matrix factorization models based on variational autoencoders.


  • Bergstra & Bengio (2012) Bergstra, James and Bengio, Yoshua. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13(Feb):281–305, 2012.
  • Bergstra et al. (2013) Bergstra, James, Yamins, Daniel, and Cox, David D. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. Proceedings of the International Conference on Machine Learning, 28:115–123, 2013.
  • Bergstra et al. (2011) Bergstra, James S, Bardenet, Rémi, Bengio, Yoshua, and Kégl, Balázs. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pp. 2546–2554, 2011.
  • Feurer et al. (2015) Feurer, Matthias, Klein, Aaron, Eggensperger, Katharina, Springenberg, Jost, Blum, Manuel, and Hutter, Frank. Efficient and robust automated machine learning. In Advances in Neural Information Processing Systems, pp. 2962–2970, 2015.
  • Grünewälder et al. (2010) Grünewälder, Steffen, Audibert, Jean-Yves, Opper, Manfred, and Shawe-Taylor, John. Regret bounds for Gaussian process bandit problems. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 273–280, 2010.
  • Guyon et al. (2015) Guyon, Isabelle, Bennett, Kristin, Cawley, Gavin, Escalante, Hugo Jair, Escalera, Sergio, Ho, Tin Kam, Macia, Núria, Ray, Bisakha, Saeed, Mehreen, Statnikov, Alexander, et al. Design of the 2015 ChaLearn AutoML challenge. In Neural Networks (IJCNN), 2015 International Joint Conference on, pp. 1–8. IEEE, 2015.
  • Guyon et al. (2016) Guyon, Isabelle, Chaabane, Imad, Escalante, Hugo Jair, Escalera, Sergio, Jajetic, Damir, Lloyd, James Robert, Macià, Núria, Ray, Bisakha, Romaszko, Lukasz, Sebag, Michèle, Statnikov, Alexander, Treguer, Sébastien, and Viegas, Evelyne. A brief review of the ChaLearn AutoML challenge: Any-time any-dataset learning without human intervention. In Hutter, Frank, Kotthoff, Lars, and Vanschoren, Joaquin (eds.), Proceedings of the Workshop on Automatic Machine Learning, volume 64 of Proceedings of Machine Learning Research, pp. 21–30, New York, New York, USA, 24 Jun 2016. PMLR.
  • Hutter et al. (2011) Hutter, Frank, Hoos, Holger H, and Leyton-Brown, Kevin. Sequential model-based optimization for general algorithm configuration. In International Conference on Learning and Intelligent Optimization, pp. 507–523. Springer, 2011.
  • Lawrence (2005) Lawrence, Neil. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6(Nov):1783–1816, 2005.
  • Lawrence & Urtasun (2009) Lawrence, Neil and Urtasun, Raquel. Non-linear matrix factorization with Gaussian processes. Proceedings of the International Conference on Machine Learning, 2009.
  • Li et al. (2016a) Li, Lisha, Jamieson, Kevin, DeSalvo, Giulia, Rostamizadeh, Afshin, and Talwalkar, Ameet. Hyperband: A novel bandit-based approach to hyperparameter optimization. March 2016a.
  • Li et al. (2016b) Li, Lisha, Jamieson, Kevin, DeSalvo, Giulia, Rostamizadeh, Afshin, and Talwalkar, Ameet. Efficient hyperparameter optimization and infinitely many armed bandits. arXiv preprint arXiv:1603.06560, 2016b.
  • Malitsky & O’Sullivan (2014) Malitsky, Yuri and O’Sullivan, Barry. Latent features for algorithm selection. In Seventh Annual Symposium on Combinatorial Search, July 2014.
  • Mısır & Sebag (2017) Mısır, Mustafa and Sebag, Michèle. Alors: An algorithm recommender system. Artificial Intelligence, 244:291–314, March 2017.
  • Močkus (1975) Močkus, J. On Bayesian methods for seeking the extremum. In Optimization Techniques IFIP Technical Conference, pp. 400–404. Springer, 1975.
  • Osborne et al. (2009) Osborne, Michael A, Garnett, Roman, and Roberts, Stephen J. Gaussian processes for global optimization. In 3rd International Conference on Learning and Intelligent Optimization (LION3), pp. 1–15, 2009.
  • Pedregosa et al. (2011) Pedregosa, Fabian, Varoquaux, Gaël, Gramfort, Alexandre, Michel, Vincent, Thirion, Bertrand, Grisel, Olivier, Blondel, Mathieu, Prettenhofer, Peter, Weiss, Ron, Dubourg, Vincent, Vanderplas, Jake, Passos, Alexandre, Cournapeau, David, Brucher, Matthieu, Perrot, Matthieu, and Duchesnay, Édouard. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • Perrone et al. (2017) Perrone, Valerio, Jenatton, Rodolphe, Seeger, Matthias, and Archambeau, Cedric. Multiple adaptive Bayesian linear regression for scalable Bayesian optimization with warm start. December 2017.
  • Reif et al. (2012) Reif, Matthias, Shafait, Faisal, and Dengel, Andreas. Meta-learning for evolutionary parameter optimization of classifiers. Machine Learning, 87(3):357–380, June 2012.
  • Salakhutdinov & Mnih (2008) Salakhutdinov, Ruslan and Mnih, Andriy. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In Proceedings of the 25th International Conference on Machine Learning, pp. 880–887, 2008.
  • Schilling et al. (2015) Schilling, Nicolas, Wistuba, Martin, Drumond, Lucas, and Schmidt-Thieme, Lars. Hyperparameter optimization with factorized multilayer perceptrons. In Machine Learning and Knowledge Discovery in Databases, Lecture Notes in Computer Science, pp. 87–103. Springer, Cham, September 2015.
  • Shahriari et al. (2016) Shahriari, Bobak, Swersky, Kevin, Wang, Ziyu, Adams, Ryan P, and de Freitas, Nando. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.
  • Snoek et al. (2012) Snoek, Jasper, Larochelle, Hugo, and Adams, Ryan P. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pp. 2951–2959, 2012.
  • Springenberg et al. (2016) Springenberg, Jost Tobias, Klein, Aaron, Falkner, Stefan, and Hutter, Frank. Bayesian optimization with robust Bayesian neural networks. In Lee, D D, Sugiyama, M, Luxburg, U V, Guyon, I, and Garnett, R (eds.), Advances in Neural Information Processing Systems 29, pp. 4134–4142. Curran Associates, Inc., 2016.
  • Srinivas et al. (2009) Srinivas, Niranjan, Krause, Andreas, Kakade, Sham M, and Seeger, Matthias. Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995, 2009.
  • Stern et al. (2010) Stern, D H, Samulowitz, H, Herbrich, R, Graepel, T, et al. Collaborative expert portfolio management. AAAI, 2010.
  • Swersky et al. (2013) Swersky, Kevin, Snoek, Jasper, and Adams, Ryan P. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pp. 2004–2012, 2013.
  • Vanschoren et al. (2013) Vanschoren, Joaquin, van Rijn, Jan N., Bischl, Bernd, and Torgo, Luis. OpenML: Networked science in machine learning. SIGKDD Explorations, 15(2):49–60, 2013.
  • Williams & Rasmussen (2006) Williams, Christopher KI and Rasmussen, Carl Edward. Gaussian processes for machine learning. The MIT Press, Cambridge, MA, USA, 2006.
  • Wistuba et al. (2015) Wistuba, Martin, Schilling, Nicolas, and Schmidt-Thieme, Lars. Learning data set similarities for hyperparameter optimization initializations. In MetaSel@ PKDD/ECML, pp. 15–26, 2015.