A Globally Convergent Gradient-based Bilevel Hyperparameter Optimization Method

08/25/2022
by Ankur Sinha, et al.

Hyperparameter optimization in machine learning is often performed using naive techniques that yield only an approximate set of hyperparameters. Although techniques such as Bayesian optimization search a given hyperparameter domain intelligently, they do not guarantee an optimal solution. A major drawback of most of these approaches is that their search domain grows exponentially with the number of hyperparameters, which increases the computational cost and slows them down. The hyperparameter optimization problem is inherently a bilevel optimization task, and some studies have attempted bilevel solution methodologies for it. However, these studies assume a unique set of model weights that minimizes the training loss, an assumption generally violated by deep learning architectures. This paper discusses a gradient-based bilevel method that addresses these drawbacks. The proposed method handles continuous hyperparameters, for which we choose the regularization hyperparameter in our experiments. The method guarantees convergence to the set of optimal hyperparameters, which we prove theoretically. The idea is to approximate the lower-level optimal value function using Gaussian process regression, which reduces the bilevel problem to a single-level constrained optimization task that is solved using the augmented Lagrangian method. An extensive computational study on the MNIST and CIFAR-10 datasets with multi-layer perceptron and LeNet architectures confirms the efficiency of the proposed method. A comparison against grid search, random search, Bayesian optimization, and the HyperBand method on various hyperparameter problems shows that the proposed algorithm converges at a lower computational cost and leads to models that generalize better on the testing set.
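The core mechanics described in the abstract (fit a Gaussian process to the lower-level optimal value function, then solve the reduced single-level constrained problem with an augmented Lagrangian) can be illustrated compactly. The sketch below is not the authors' implementation: it substitutes ridge regression for the network training problem so that it runs in seconds, and names such as f_lower, F_upper, and phi_hat are assumptions made for this example. It uses scikit-learn's GaussianProcessRegressor and SciPy's minimize.

```python
# Minimal sketch of the value-function reformulation for one continuous
# hyperparameter (the L2 regularization weight lam). Illustrative only.
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
Xtr, Xval = rng.normal(size=(80, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
ytr = Xtr @ w_true + 0.1 * rng.normal(size=80)
yval = Xval @ w_true + 0.1 * rng.normal(size=40)

def f_lower(w, lam):   # lower-level objective: training loss + L2 penalty
    return np.mean((Xtr @ w - ytr) ** 2) + lam * (w @ w)

def F_upper(w):        # upper-level objective: validation loss
    return np.mean((Xval @ w - yval) ** 2)

# 1) Sample lam, solve the lower level, and fit a GP to the optimal value
#    function phi(lam) = min_w f_lower(w, lam).
lams = np.linspace(1e-4, 1.0, 15)
phis = [minimize(f_lower, np.zeros(5), args=(l,)).fun for l in lams]
gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
gp.fit(lams.reshape(-1, 1), phis)
phi_hat = lambda lam: gp.predict(np.array([[lam]]))[0]

# 2) Reduced single-level problem: min F(w) over (w, lam) subject to
#    f_lower(w, lam) <= phi_hat(lam), handled with a basic augmented
#    Lagrangian (PHR) loop for the inequality constraint.
bounds = [(None, None)] * 5 + [(1e-6, 1.0)]    # keep lam in its domain
mu, rho = 0.0, 10.0
z = np.concatenate([np.zeros(5), [0.1]])       # decision vector [w, lam]
for _ in range(15):
    def aug_lag(z):
        w, lam = z[:-1], z[-1]
        g = f_lower(w, lam) - phi_hat(lam)     # constraint g <= 0
        return F_upper(w) + (rho / 2.0) * max(0.0, mu / rho + g) ** 2
    z = minimize(aug_lag, z, method="L-BFGS-B", bounds=bounds).x
    g = f_lower(z[:-1], z[-1]) - phi_hat(z[-1])
    mu = max(0.0, mu + rho * g)                # multiplier update
print("lambda* ~", z[-1], "validation loss:", F_upper(z[:-1]))
```

In the paper's setting one would expect f_lower to be the training loss of the MLP or LeNet model and the GP surrogate to be built from sampled (lambda, phi) pairs in the same spirit; the constraint f_lower(w, lam) <= phi_hat(lam) is what ties the model weights to (approximate) lower-level optimality without assuming those weights are unique.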

Related research

07/21/2020 — A Gradient-based Bilevel Optimization Approach for Tuning Hyperparameters in Machine Learning
Hyperparameter tuning is an active area of research in machine learning,...

10/20/2021 — Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation
Machine learning training methods depend plentifully and intricately on ...

08/07/2023 — HomOpt: A Homotopy-Based Hyperparameter Optimization Method
Machine learning has achieved remarkable success over the past couple of...

03/29/2018 — An LP-based hyperparameter optimization model for language modeling
In order to find hyperparameters for a machine learning model, algorithm...

04/03/2020 — Weighted Random Search for Hyperparameter Optimization
We introduce an improved version of Random Search (RS), used here for hy...

02/15/2020 — Multi-Task Multicriteria Hyperparameter Optimization
We present a new method for searching optimal hyperparameters among seve...
