Hyperparameter Learning via Distributional Transfer

10/15/2018
by Ho Chung Leon Law et al.

Bayesian optimisation is a popular technique for hyperparameter learning but typically requires initial 'exploration' even in cases where potentially similar prior tasks have been solved. We propose to transfer information across tasks using kernel embeddings of the distributions of the training datasets used in those tasks. The resulting method converges faster than existing baselines, in some cases requiring only a few evaluations of the target objective.
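As a rough illustration of the central ingredient, the sketch below computes empirical kernel mean embeddings of two training datasets and compares them via the (squared) maximum mean discrepancy, a natural measure of how close two task distributions are and hence of how likely hyperparameter information is to transfer. This is only a minimal, self-contained approximation of the idea, not the paper's actual method (which learns task representations jointly within the Bayesian optimisation surrogate); the RBF kernel, the median-heuristic bandwidth, and all function names here are assumptions made for illustration.

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Median of pairwise distances, a common heuristic for the RBF bandwidth."""
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    return np.median(dists[dists > 0])

def rbf_kernel(X, Y, bandwidth):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mmd_task_distance(X_source, X_target, bandwidth):
    """
    Squared MMD between the empirical kernel mean embeddings of two datasets:
    ||mu_P - mu_Q||^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')].
    A small value suggests the tasks' training distributions are similar,
    so hyperparameter information is more likely to transfer between them.
    """
    k_ss = rbf_kernel(X_source, X_source, bandwidth).mean()
    k_tt = rbf_kernel(X_target, X_target, bandwidth).mean()
    k_st = rbf_kernel(X_source, X_target, bandwidth).mean()
    return k_ss + k_tt - 2.0 * k_st

# Toy usage: two source tasks, one close to the target task and one far from it.
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(200, 5))
source_near = rng.normal(0.1, 1.0, size=(200, 5))
source_far = rng.normal(3.0, 1.0, size=(200, 5))

bw = median_heuristic_bandwidth(target)
print(mmd_task_distance(source_near, target, bw))  # small: similar distributions
print(mmd_task_distance(source_far, target, bw))   # large: dissimilar distributions
```

In a transfer setting, such a distance could be used to weight evidence from previous tasks when initialising or warm-starting the surrogate model on the target task; the paper itself folds the embeddings directly into the model rather than using them as a standalone distance.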

Related research

Hyperparameter Transfer Learning through Surrogate Alignment for Efficient Deep Neural Network Training (07/31/2016)
Recently, several optimization methods have been successfully applied to...

Warm Starting CMA-ES for Hyperparameter Optimization (12/13/2020)
Hyperparameter optimization (HPO), formulated as black-box optimization ...

Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation (10/20/2021)
Machine learning training methods depend plentifully and intricately on ...

Obeying the Order: Introducing Ordered Transfer Hyperparameter Optimisation (06/29/2023)
We introduce ordered transfer hyperparameter optimisation (OTHPO), a ver...

Hyperparameter Transfer Across Developer Adjustments (10/25/2020)
After developer adjustments to a machine learning (ML) algorithm, how ca...

Benchmarking the Neural Linear Model for Regression (12/18/2019)
The neural linear model is a simple adaptive Bayesian linear regression ...

Easy Transfer Learning By Exploiting Intra-domain Structures (04/02/2019)
Transfer learning aims at transferring knowledge from a well-labeled dom...
