While in standard supervised learning problems we seek the best hypothesis in a given space and with a given learning algorithm, in hyperparameter optimization (HO) and meta-learning (ML) we seek a configuration so that the optimized learning algorithm will produce a model that generalizes well to new data. The search space in ML often incorporates choices associated with the hypothesis space and the features of the learning algorithm itself (e.g., how optimization of the training loss is performed). Under this common perspective, both HO and ML essentially boil down to nesting two search problems: at the inner level we seek a good hypothesis, as in standard supervised learning, while at the outer level we seek a good configuration (including a good hypothesis space) where the inner search takes place.
HO and ML differ substantially only in the experimental settings in which they are evaluated. In HO the available data is associated with a single task and is split into a training set (used to tune the parameters) and a validation set (used to tune the hyperparameters); the data could also be split multiple times, following a cross-validation scheme. In ML, instead, we are often interested in the so-called few-shot learning setting, where data comes in the form of short episodes sampled from a common probability distribution over supervised tasks. Algorithmic techniques for solving HO and ML can also differ substantially. Classic approaches to HO (see, e.g., Hutter et al., 2015, and references therein) are only capable of handling up to a few hundred hyperparameters. Recent gradient-based techniques for HO, however, have significantly increased the number of hyperparameters that can be optimized (Domke, 2012; Maclaurin et al., 2015; Pedregosa, 2016; Franceschi et al., 2017), making it possible to handle more hyperparameters than parameters.
As shown in (Franceschi et al., 2018), HO and ML can be unified within the natural mathematical framework of bilevel programming, where an outer optimization problem is solved subject to the optimality of an inner optimization problem. The variables of the outer objective are either the hyperparameters of a supervised learning problem (HO) or the parameters of a meta-learner (ML). In HO the inner problem is usually the minimization of an empirical loss, while in ML it could concern classifiers for individual tasks. Table 1 outlines the links among bilevel programming, HO and ML.
Bilevel programming (Bard, 2013) has been suggested before in machine learning (Keerthi et al., 2007; Kunapuli et al., 2008; Flamary et al., 2014; Pedregosa, 2016), but never in the context of ML. The resulting framework, outlined in Section 2, encompasses some existing approaches to ML. A technical difficulty arises when the solution to the inner problem cannot be written analytically and one needs to resort to iterative optimization approaches. We outline this approach in Section 2.3 and briefly discuss conditions that guarantee good approximation properties.
We developed a software package, Far-HO, based on the popular deep learning framework TensorFlow (Abadi et al., 2015), to facilitate the formalization and solution of problems arising in HO and ML in a unified manner. We present an overview of the package and showcase two possible applications in Section 3.
2 A bilevel optimization framework
We consider bilevel optimization problems (see e.g. Colson et al., 2007) of the form

$\min \{ f(\lambda) : \lambda \in \Lambda \}, \quad \text{where} \quad f(\lambda) = \inf \{ E(w, \lambda) : w \in \arg\min_u L_\lambda(u) \}. \quad (1)$
Specific instances of this problem include HO and ML, which we discuss next. We call $E$ the outer objective, interpreted as a meta-train or validation error, and, for every $\lambda \in \Lambda$, we call $L_\lambda$ the inner objective. The set $\{L_\lambda : \lambda \in \Lambda\}$ is regarded as a class of objective functions parameterized by $\lambda$, wherein each single function may represent a training error, possibly averaged over multiple tasks. The cartoon in Figure 1 depicts a stereotypical scenario.
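To make the nested structure of Problem (1) concrete, the following toy sketch uses an inner problem with a closed-form minimizer and a naive grid search at the outer level. The specific functions (a regularized scalar quadratic and a squared distance to a target) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Toy bilevel instance: the inner objective L_lam(w) = (w - 2)^2 + lam * w^2
# has a unique closed-form minimizer, and the outer objective E measures the
# distance of that minimizer from a "validation" target.

def inner_argmin(lam):
    # argmin_w (w - 2)^2 + lam * w^2  =  2 / (1 + lam)
    return 2.0 / (1.0 + lam)

def outer_objective(lam):
    w = inner_argmin(lam)    # solve the inner problem first...
    return (w - 1.0) ** 2    # ...then evaluate the outer objective E at w(lam)

# Outer search over the hyperparameter lambda (here, a simple grid).
grid = np.linspace(0.0, 5.0, 501)
best_lam = min(grid, key=outer_objective)  # ≈ 1.0, since w(1) = 1 exactly
```

Gradient-based methods, discussed in Section 2.3, replace the grid with updates along the gradient of the outer objective.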
2.1 Hyperparameter Optimization
In the context of hyperparameter optimization, we are interested in minimizing the validation error of a model $g_w$, parameterized by a vector $w$, with respect to a vector of real-valued hyperparameters $\lambda$. For example, we may consider representation or regularization hyperparameters that control the hypothesis space or penalties, respectively.
In this setting, a prototypical choice for the inner objective is the regularized empirical error

$L_\lambda(w) = \sum_{(x, y) \in D_{\mathrm{tr}}} \ell(g_w(x), y) + \Omega_\lambda(w), \quad (2)$
where $D_{\mathrm{tr}}$ is a set of input/output points, $\ell$ is a prescribed loss function, and $\Omega_\lambda$ a regularizer parameterized by $\lambda$.
The outer objective represents a proxy for the generalization error of $g_w$, and it is given by the average loss on a validation set $D_{\mathrm{val}}$:

$E(w, \lambda) = \frac{1}{|D_{\mathrm{val}}|} \sum_{(x, y) \in D_{\mathrm{val}}} \ell(g_w(x), y). \quad (3)$
Note that the outer objective does not depend explicitly on the hyperparameters $\lambda$, since in HO $\lambda$ is only instrumental in finding a good model $g_w$, which is our final goal.
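The HO instantiation above can be sketched with ridge regression, whose inner problem has a closed-form solution. All data, split sizes and the hyperparameter grid below are illustrative choices.

```python
import numpy as np

# Synthetic regression data, split into training and validation sets.
rng = np.random.default_rng(0)
n, d = 60, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

X_tr, y_tr = X[:40], y[:40]      # used to tune the parameters w
X_val, y_val = X[40:], y[40:]    # used to tune the hyperparameter lam

def inner_solve(lam):
    # Inner problem: argmin_w ||X_tr w - y_tr||^2 + lam ||w||^2 (closed form).
    A = X_tr.T @ X_tr + lam * np.eye(d)
    return np.linalg.solve(A, X_tr.T @ y_tr)

def outer_objective(lam):
    # Outer objective E: average validation loss of the inner minimizer.
    w = inner_solve(lam)
    return np.mean((X_val @ w - y_val) ** 2)

lams = np.logspace(-3, 2, 50)
best_lam = min(lams, key=outer_objective)
```

Note how the outer objective depends on $\lambda$ only through the inner minimizer, mirroring the remark above.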
2.2 Meta-Learning

In meta-learning (ML) the inner and outer objectives are computed by respectively summing and averaging the training and the validation errors of multiple tasks. The goal is to produce a learning algorithm that will work well on novel tasks (this is also related to multitask learning, except that in ML the goal is to extrapolate to novel tasks).
For this purpose, we have available a meta-training set $\mathcal{D} = \{D^j\}_{j=1}^{N}$, which is a collection of datasets sampled from a meta-distribution $\mathcal{P}$. Each dataset $D^j$, with $D^j = D^j_{\mathrm{tr}} \cup D^j_{\mathrm{val}}$, is linked to a specific task. Note that the output space is task dependent (e.g. a multi-class classification problem with a variable number of classes). The model for each task is a function $g_{w^j, \lambda}$, identified by a parameter vector $w^j$ and hyperparameters $\lambda$. A key point here is that $\lambda$ is shared between the tasks. With this notation, the inner and outer objectives are
$L_\lambda(w) = \sum_{j=1}^{N} L(w^j, \lambda, D^j_{\mathrm{tr}}), \qquad E(w, \lambda) = \frac{1}{N} \sum_{j=1}^{N} L(w^j, \lambda, D^j_{\mathrm{val}}), \quad (4)$

respectively, where $L(w^j, \lambda, S)$ is the empirical loss of the pair $(w^j, \lambda)$ on a set of examples $S$, and $w = (w^1, \dots, w^N)$. Particular cases are obtained by choosing a model of the form $g_{w^j, \lambda} = g_{w^j} \circ h_\lambda$, in which case $\lambda$ parameterizes a feature mapping, or by penalizing the distance between $w^j$ and $\lambda$, in which case $\lambda$ represents a common model around which task-specific models are to be found.
Note that the inner and outer losses for task $j$ use different train/validation splits of the corresponding dataset $D^j$. Unlike in HO, in ML the final goal is to find a good $\lambda$, and the task-specific weights $w^j$ are now instrumental.
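The objectives in (4) can be sketched numerically with a toy setup in which the shared hyperparameter is a common center towards which each task-specific weight is pulled (the second particular case above). All data, the scalar linear model and the regularization strength are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task():
    # Tasks are scalar regressions whose true weights cluster around 2.0.
    w_star = 2.0 + 0.3 * rng.normal()
    x = rng.normal(size=20)
    y = w_star * x + 0.1 * rng.normal(size=20)
    return (x[:10], y[:10]), (x[10:], y[10:])  # per-task train/val split

tasks = [make_task() for _ in range(8)]
mu = 0.1  # strength of the pull towards the shared center lam

def inner_solve(lam, x_tr, y_tr):
    # Inner problem per task: argmin_w sum_i (w x_i - y_i)^2 + mu (w - lam)^2
    return (x_tr @ y_tr + mu * lam) / (x_tr @ x_tr + mu)

def outer_objective(lam):
    # Outer objective: average validation error over tasks, each evaluated
    # at its own inner minimizer w_j (which depends on the shared lam).
    errs = []
    for (x_tr, y_tr), (x_val, y_val) in tasks:
        w_j = inner_solve(lam, x_tr, y_tr)
        errs.append(np.mean((w_j * x_val - y_val) ** 2))
    return np.mean(errs)

grid = np.linspace(0.0, 4.0, 201)
best_lam = min(grid, key=outer_objective)
```

Each task uses its own train/validation split, and only $\lambda$ is shared, as stated above.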
2.3 Gradient-Based Approach
We now discuss a general approach to solve Problem (1). In general there is no closed-form expression for the inner minimizer $w_\lambda$, so it is not possible to directly optimize the outer objective function. A compelling approach is to replace the inner problem with a dynamical system, as discussed in (Domke, 2012; Maclaurin et al., 2015; Franceschi et al., 2017).
Specifically, we let $w_\lambda = w_{T, \lambda}$, where $T$ is a prescribed positive integer, and consider the following approximation of Problem (1):

$\min_{\lambda} f_T(\lambda) = E(w_{T, \lambda}, \lambda), \qquad w_{t, \lambda} = \Phi_t(w_{t-1, \lambda}, \lambda), \quad t = 1, \dots, T, \quad (5)$

with $w_{0, \lambda} = \Phi_0(\lambda)$ a smooth initialization mapping and, for every $t \in \{1, \dots, T\}$, $\Phi_t$ a smooth mapping that represents the operation performed by the $t$-th step of an optimization algorithm such as gradient descent (other algorithms such as Adam require auxiliary variables that need to be included in $w$), i.e. $\Phi_t(w, \lambda) = w - \eta_t \nabla L_\lambda(w)$.
The approximation of the bilevel problem (1) by Procedure (5) raises the issue of the quality of this approximation: $f_T$ may, in general, be unrelated to $f$. Among the possible issues, we note that, for a chosen $T$, $w_{T, \lambda}$ may not even be an approximate minimizer of $L_\lambda$, or, in the presence of multiple minimizers, the optimization dynamics may lead to a minimizer which does not necessarily achieve the infimum of $E$. The situation is, however, different if the inner problem admits a unique minimizer for every $\lambda \in \Lambda$ (e.g. when $L_\lambda$ is strongly convex). In this case, it is possible to show, under some regularity assumptions, that the set of minimizers of $f_T$ converges to that of $f$ as $T \to \infty$ (Franceschi et al., 2018) and that the gradient of $f_T$ converges to $\nabla f$.
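The following sketch differentiates through the unrolled dynamics of (5) by hand, on an illustrative scalar instance (strongly convex inner loss, so the inner minimizer is unique): the forward pass stores the trajectory, and a reverse pass accumulates the gradient of $f_T$ with respect to $\lambda$. The losses and step size are illustrative choices.

```python
# Inner loss L_lam(w) = (w - 2)^2 + lam * w^2, outer loss E(w) = (w - 1)^2.

def unrolled(lam, T=20, eta=0.05, w0=0.0):
    # Forward pass: T steps of gradient descent, storing the trajectory.
    ws = [w0]
    for _ in range(T):
        w = ws[-1]
        ws.append(w - eta * (2 * (w - 2) + 2 * lam * w))
    return ws

def hypergradient(lam, T=20, eta=0.05):
    # Reverse pass: backpropagate dE/dlam through the unrolled iterations.
    ws = unrolled(lam, T, eta)
    alpha = 2 * (ws[-1] - 1)                 # dE/dw_T
    g = 0.0
    for t in range(T, 0, -1):
        g += alpha * (-2 * eta * ws[t - 1])  # direct dependence of step t on lam
        alpha *= 1 - 2 * eta * (1 + lam)     # d w_t / d w_{t-1}
    return g

# Sanity check against a central finite difference of f_T(lam) = E(w_{T,lam}).
f_T = lambda lam: (unrolled(lam)[-1] - 1) ** 2
eps = 1e-6
fd = (f_T(1.0 + eps) - f_T(1.0 - eps)) / (2 * eps)
g = hypergradient(1.0)
```

Because the inner loss is strongly convex here, increasing $T$ drives $w_{T,\lambda}$ to the unique inner minimizer, in line with the convergence discussion above.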
On the other hand, Procedure (5) also suggests considering the inner dynamics as a form of approximate empirical error minimization, which is valid in its own right. From this perspective it is possible (and indeed natural) to include among the components of $\lambda$ variables associated with the optimization algorithm itself. For example, in (de Freitas, 2016; Wichrowska et al., 2017) the mapping $\Phi_t$ is implemented as a recurrent neural network, while (Finn et al., 2017) focus on the initialization mapping by letting $\Phi_0(\lambda) = \lambda$.
A major advantage of the reformulation above is that it makes it possible to compute efficiently the gradient of $f_T$, called the hypergradient, either in time or in memory (Maclaurin et al., 2015; Franceschi et al., 2017), by making use of reverse or forward mode algorithmic differentiation (Griewank and Walther, 2008). This makes it feasible to efficiently search for a good configuration in a high-dimensional hyperparameter space – reverse mode having a complexity in time independent of the size of $\lambda$.
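Forward mode can be sketched on the same kind of scalar toy instance: a tangent $z_t = dw_t/d\lambda$ is propagated alongside the inner iterates, so memory is constant in $T$ (the trade-off, not visible in a scalar example, is that the time cost grows with the number of hyperparameters). The losses and step size are illustrative choices.

```python
# Illustrative scalar losses: L_lam(w) = (w - 2)^2 + lam * w^2, E(w) = (w - 1)^2.

def forward_hypergrad(lam, T=20, eta=0.05, w0=0.0):
    w, z = w0, 0.0
    for _ in range(T):
        # Tangent propagation: z_t = (dPhi/dw) z_{t-1} + dPhi/dlam,
        # both evaluated at the previous iterate w_{t-1}.
        z = (1 - 2 * eta * (1 + lam)) * z - 2 * eta * w
        w = w - eta * (2 * (w - 2) + 2 * lam * w)
    return w, 2 * (w - 1) * z   # (w_T, dE(w_T)/dlam by the chain rule)

# Sanity check against a central finite difference of f_T(lam) = E(w_{T,lam}).
w_T, g = forward_hypergrad(1.0)
f_T = lambda lam: (forward_hypergrad(lam)[0] - 1) ** 2
eps = 1e-6
fd = (f_T(1.0 + eps) - f_T(1.0 - eps)) / (2 * eps)
```

No trajectory is stored: only the current pair $(w, z)$ is kept, in contrast with the reverse-mode accumulation.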
3 Far-HO: A Gradient-Based Hyperparameter Optimization Package
We developed a software package in Python with the aim of facilitating the formalization, implementation and numerical solution of HO and ML problems with continuous hyperparameters (e.g. ridge regression, logistic regression, deep neural networks, ...). Far-HO, available at https://github.com/lucfra/FAR-HO, is based on the popular library TensorFlow (Abadi et al., 2015); by leveraging the computational power of modern GPUs, it allows gradient-based hyperparameter optimization techniques to scale up to high-dimensional problems (many parameters and/or hyperparameters). Notably, it implements dynamic forward and reverse mode iterative differentiation for Procedure (5) (it is not required to "unroll" the computational graph of the optimization dynamics for $T$ steps, which quickly becomes impractical as $T$ grows) and warm restart strategies for the optimization of the inner problem and the computation of the hypergradient. The package exposes utility functions to declare hyperparameters, instantiate commonly used first-order optimization dynamics such as gradient descent or Adam, and features single-line calls to set up approximate bilevel problems. Two practical examples are illustrated in the remainder of this section. Experimental results obtained with Far-HO are reported in (Franceschi et al., 2018).
Hyperparameter Optimization: Weighting The Examples’ Losses
The illustrative example in Figure 1 shows how to optimize hyperparameters that weigh the contribution of each training example to the total loss. This setting is useful when part of the training data is corrupted, as in the data hyper-cleaning experiments in (Franceschi et al., 2017). In this example, the inner objective is a weighted training error, $L_\lambda(w) = \sum_i \lambda_i \, \ell(g_w(x_i), y_i)$. We also treat the learning rate as a hyperparameter (Lines 7 and 13). As illustrated in Line 15, hyperparameters can be "declared" using the get_hyperparameter method, which mimics TensorFlow's get_variable, and can be placed anywhere in a computational graph. The method minimize (closely related to minimize in TensorFlow) accepts two scalar tensors (E and L, defined in Lines 10-11) for the outer and the inner problem, respectively, and two associated optimizers, outer_opt and inner_opt (one of which is directly taken from TensorFlow in this example).
Meta-Learning: Hyper-Representation Networks
Our approach (Franceschi et al., 2018) to meta-learning consists in partitioning the model into a cross-task intermediate hyper-representation mapping $h_\lambda$ (parameterized by a vector $\lambda$) and task-specific models $g_{w^j}$ (parameterized by vectors $w^j$). The final ground model for task $j$ is thus given by $g_{w^j} \circ h_\lambda$; $\lambda$ and the $w^j$ are learned by respectively optimizing the outer and the inner objective in (4). This method is inspired by (Baxter, 1995; Caruana, 1998), which instead jointly optimize both the hyper-representation and the task-specific weights.
We instantiate the approximation scheme in Problem (5), in which the weights of the task-specific models can be learned by $T$ iterations of gradient descent, resulting in the problem

$\min_{\lambda} f_T(\lambda) = \frac{1}{N} \sum_{j=1}^{N} L(w^j_T, \lambda, D^j_{\mathrm{val}}), \qquad w^j_t = w^j_{t-1} - \eta \, \nabla_{w^j} L(w^j_{t-1}, \lambda, D^j_{\mathrm{tr}}),$

with $w^j_0 = 0$. Since, in general, the number of episodes in a meta-training set is large, we compute a stochastic approximation of the gradient of $f_T$ by sampling a mini-batch of episodes. Illustrative code for this example is presented in Figure 2.
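A toy numpy analogue of this stochastic scheme is sketched below: a shared scalar "representation" parameter (here, the steepness of tanh features), task-specific linear weights fitted in closed form on each episode's training split, and an outer update estimated on a mini-batch of sampled episodes. For simplicity the hypergradient is approximated by finite differences rather than by differentiating the inner dynamics; all data, models and step sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def make_episode():
    # Each episode is a small regression task whose targets depend on
    # a shared nonlinearity tanh(2x); only the scale w_star varies per task.
    w_star = 1.0 + 0.2 * rng.normal()
    x = rng.normal(size=12)
    y = w_star * np.tanh(2.0 * x) + 0.1 * rng.normal(size=12)
    return (x[:6], y[:6]), (x[6:], y[6:])  # per-episode train/val split

episodes = [make_episode() for _ in range(50)]

def episode_val_loss(lam, ep):
    (x_tr, y_tr), (x_val, y_val) = ep
    f_tr, f_val = np.tanh(lam * x_tr), np.tanh(lam * x_val)
    w = (f_tr @ y_tr) / (f_tr @ f_tr + 1e-8)  # task-specific least squares
    return np.mean((w * f_val - y_val) ** 2)

def full_outer(lam):
    return np.mean([episode_val_loss(lam, ep) for ep in episodes])

def stochastic_hypergrad(lam, batch_size=10, eps=1e-5):
    # Outer gradient estimated on a sampled mini-batch of episodes.
    idx = rng.choice(len(episodes), size=batch_size, replace=False)
    loss = lambda l: np.mean([episode_val_loss(l, episodes[i]) for i in idx])
    return (loss(lam + eps) - loss(lam - eps)) / (2 * eps)

lam0, lam = 0.5, 0.5
for _ in range(50):  # stochastic outer optimization of the shared lam
    lam -= 0.5 * stochastic_hypergrad(lam)
```

Only the shared representation parameter is optimized at the outer level; the task-specific weights are re-fitted on each episode, mirroring the division of roles in (4).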
- Abadi et al. (2015) Martín Abadi, Ashish Agarwal, et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
- Bard (2013) Jonathan F. Bard. Practical bilevel optimization: algorithms and applications, volume 30. Springer Science & Business Media, 2013.
- Baxter (1995) Jonathan Baxter. Learning internal representations. In Proceedings of the 8th Annual Conference on Computational Learning Theory (COLT), pages 311–320. ACM, 1995.
- Caruana (1998) Rich Caruana. Multitask learning. In Learning to learn, pages 95–133. Springer, 1998.
- Colson et al. (2007) Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization. Annals of operations research, 153(1):235–256, 2007.
- de Freitas (2016) Nando de Freitas. Learning to Learn and Compositionality with Deep Recurrent Neural Networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
- Domke (2012) Justin Domke. Generic Methods for Optimization-Based Modeling. In AISTATS, volume 22, pages 318–326, 2012. URL http://www.jmlr.org/proceedings/papers/v22/domke12/domke12.pdf.
- Finn et al. (2017) Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, (ICML), pages 1126–1135, 2017. URL http://proceedings.mlr.press/v70/finn17a.html.
- Flamary et al. (2014) Rémi Flamary, Alain Rakotomamonjy, and Gilles Gasso. Learning constrained task similarities in graph-regularized multi-task learning. In Regularization, Optimization, Kernels, and Support Vector Machines, page 103, 2014.
- Franceschi et al. (2017) Luca Franceschi, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. Forward and reverse gradient-based hyperparameter optimization. In Proceedings of the 34th International Conference on Machine Learning, (ICML), pages 1165–1173, 2017. URL http://proceedings.mlr.press/v70/franceschi17a.html.
- Franceschi et al. (2018) Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimiliano Pontil. Bilevel programming for hyperparameter optimization and meta-learning. In Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
- Griewank and Walther (2008) Andreas Griewank and Andrea Walther. Evaluating derivatives: principles and techniques of algorithmic differentiation. SIAM, 2008.
- Hutter et al. (2015) Frank Hutter, Jörg Lücke, and Lars Schmidt-Thieme. Beyond Manual Tuning of Hyperparameters. KI - Künstliche Intelligenz, 29(4):329–337, November 2015. ISSN 0933-1875, 1610-1987. doi: 10.1007/s13218-015-0381-0. URL http://link.springer.com/10.1007/s13218-015-0381-0.
- Keerthi et al. (2007) S Sathiya Keerthi, Vikas Sindhwani, and Olivier Chapelle. An efficient method for gradient-based adaptation of hyperparameters in svm models. In Advances in Neural Information Processing Systems (NIPS), pages 673–680, 2007.
- Kunapuli et al. (2008) G. Kunapuli, K.P. Bennett, Jing Hu, and Jong-Shi Pang. Classification model selection via bilevel programming. Optimization Methods and Software, 23(4):475–489, August 2008. ISSN 1055-6788, 1029-4937. doi: 10.1080/10556780802102586. URL http://www.tandfonline.com/doi/abs/10.1080/10556780802102586.
- Maclaurin et al. (2015) Dougal Maclaurin, David K. Duvenaud, and Ryan P. Adams. Gradient-based hyperparameter optimization through reversible learning. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 2113–2122, 2015.
- Pedregosa (2016) Fabian Pedregosa. Hyperparameter optimization with approximate gradient. In Proceedings of The 33rd International Conference on Machine Learning (ICML), pages 737–746, 2016. URL http://proceedings.mlr.press/v48/pedregosa16.html.
- Wichrowska et al. (2017) Olga Wichrowska, Niru Maheswaranathan, Matthew W. Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Nando de Freitas, and Jascha Sohl-Dickstein. Learned optimizers that scale and generalize. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 3751–3760, 2017.