Obeying the Order: Introducing Ordered Transfer Hyperparameter Optimisation

06/29/2023
by Sigrid Passano Hellan, et al.

We introduce ordered transfer hyperparameter optimisation (OTHPO), a version of transfer learning for hyperparameter optimisation (HPO) in which the tasks follow a sequential order. Unlike in state-of-the-art transfer HPO, the assumption is that each task is most correlated with those immediately before it. This matches many deployed settings, where hyperparameters are retuned as more data is collected; for instance, tuning a sequence of movie recommendation systems as more movies and ratings are added. We propose a formal definition, outline the differences from related problems, and propose a basic OTHPO method that outperforms state-of-the-art transfer HPO. We empirically show the importance of taking order into account using ten benchmarks. The benchmarks are in the setting of gradually accumulating data, and span XGBoost, random forest, approximate k-nearest neighbor, elastic net, support vector machines and a separate real-world motivated optimisation problem. We open source the benchmarks to foster future research on ordered transfer HPO.
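The core intuition can be sketched in a few lines: when retuning as data accumulates, the previous task's best hyperparameters are a strong candidate for the next task. The snippet below is a minimal, hypothetical illustration (not the paper's actual method): a random-search tuner per task that warm-starts from the preceding task's best configuration, against a toy objective whose optimum drifts slowly across tasks.

```python
import random

def tune(objective, task, candidates, budget, warm_start=None, seed=0):
    """Evaluate `budget` configs on `task`, minimising `objective`.

    If `warm_start` is given, re-evaluate the previous task's best
    configuration first -- the ordered-transfer assumption is that
    adjacent tasks are highly correlated, so it is likely near-optimal.
    """
    rng = random.Random(seed)
    pool = rng.sample(candidates, budget)
    if warm_start is not None:
        pool[0] = warm_start
    return min(pool, key=lambda c: objective(task, c))

# Toy 1-D objective: the optimum drifts as tasks accumulate data,
# mimicking the gradually-growing-dataset setting of the benchmarks.
def objective(task, c):
    return (c - (0.3 + 0.05 * task)) ** 2

candidates = [i / 100 for i in range(101)]  # hyperparameter grid on [0, 1]
best = None
for task in range(5):
    best = tune(objective, task, candidates, budget=10,
                warm_start=best, seed=task)
    print(f"task {task}: best hyperparameter = {best:.2f}")
```

Because each task's search inherits the previous incumbent, the tuner tracks the drifting optimum with a small per-task budget; a cold-started search would have to rediscover the good region on every task.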


