Hyperparameter Transfer Across Developer Adjustments

10/25/2020
by Danny Stoll, et al.

After developer adjustments to a machine learning (ML) algorithm, how can the results of an old hyperparameter optimization (HPO) automatically be used to speed up a new HPO? This question poses a challenging problem, as developer adjustments can change which hyperparameter settings perform well, or even the hyperparameter search space itself. While many approaches exist that leverage knowledge obtained on previous tasks, so far, knowledge from previous development steps remains entirely untapped. In this work, we remedy this situation and propose a new research framework: hyperparameter transfer across adjustments (HT-AA). To lay a solid foundation for this research framework, we provide four simple HT-AA baseline algorithms and eight benchmarks changing various aspects of ML algorithms, their hyperparameter search spaces, and the neural architectures used. The best baseline, on average and depending on the budgets for the old and new HPO, reaches a given performance 1.2–2.6x faster than a prominent HPO algorithm without transfer. As HPO is a crucial step in ML development but requires extensive computational resources, this speedup would lead to faster development cycles, lower costs, and reduced environmental impacts. To make these benefits available to ML developers off-the-shelf and to facilitate future research on HT-AA, we provide Python packages for our baselines and benchmarks.
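The abstract does not spell out the baseline algorithms, but the following is a minimal sketch of one plausible transfer strategy in the HT-AA spirit: evaluate the old HPO's best configuration first (projected onto the new, possibly adjusted search space), then continue with plain random search. All function and variable names here are illustrative assumptions, not the API of the authors' released packages.

```python
import random

def random_config(space):
    """Draw one configuration from a search space given as
    {hyperparameter: list_of_choices}."""
    return {name: random.choice(choices) for name, choices in space.items()}

def best_first_search(objective, new_space, old_best, budget):
    """Spend the first evaluation on the old incumbent (where it still fits
    the new space), then random-search the remaining budget."""
    # Keep only hyperparameters that still exist and whose old value is
    # still valid after the developer adjustment; fill the rest randomly.
    warm_start = random_config(new_space)
    for name, value in old_best.items():
        if name in new_space and value in new_space[name]:
            warm_start[name] = value

    evaluations = [(warm_start, objective(warm_start))]
    for _ in range(budget - 1):
        config = random_config(new_space)
        evaluations.append((config, objective(config)))
    # Return the best (configuration, loss) pair found within the budget.
    return min(evaluations, key=lambda pair: pair[1])

# Toy usage: the developer adjustment removed 'weight_decay' and added
# 'dropout', yet the old incumbent still provides a useful starting point.
if __name__ == "__main__":
    new_space = {"lr": [1e-3, 1e-2, 1e-1], "dropout": [0.0, 0.3, 0.5]}
    old_best = {"lr": 1e-2, "weight_decay": 1e-4}
    toy_loss = lambda cfg: abs(cfg["lr"] - 1e-2) + cfg["dropout"]
    print(best_first_search(toy_loss, new_space, old_best, budget=10))
```

The appeal of such a warm start is that it costs only one evaluation of the new HPO budget while often recovering most of the old tuning effort whenever the adjustment left the good region of the search space intact.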

