Hyperparameter Transfer Learning through Surrogate Alignment for Efficient Deep Neural Network Training

07/31/2016
by Ilija Ilievski, et al.

Recently, several optimization methods have been successfully applied to the hyperparameter optimization of deep neural networks (DNNs). These methods work by modeling the joint distribution of hyperparameter values and the corresponding error. They become less practical when applied to modern DNNs, whose training may take days, so one cannot collect sufficient observations to model the distribution accurately. To address this challenge, we propose a method that learns to transfer optimal hyperparameter values from a small source dataset to hyperparameter values with comparable performance on a dataset of interest. Unlike existing transfer learning methods, our proposed method does not use hand-designed features. Instead, it uses surrogates to model the hyperparameter-error distributions of the two datasets and trains a neural network to learn the transfer function between them. Extensive experiments on three computer vision benchmark datasets demonstrate the efficiency of our method.
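The abstract describes the pipeline only at a high level, so a minimal sketch may help fix ideas. It assumes Gaussian-process surrogates and a small MLP for the transfer network, and it aligns the two surrogates by matching predicted errors over a shared candidate grid, which is one plausible reading of "surrogate alignment"; the synthetic data, model choices, and all names below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of surrogate-based hyperparameter transfer.
# Assumptions (not from the paper): GP surrogates, an MLP transfer
# network, error-matching alignment, and synthetic observations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Observed (hyperparameter, validation-error) pairs on each dataset.
# Here: 2 hyperparameters (e.g. log learning rate, dropout rate).
X_src = rng.uniform(0, 1, size=(40, 2))
y_src = np.sin(3 * X_src[:, 0]) + 0.5 * X_src[:, 1] + 0.05 * rng.normal(size=40)
X_tgt = rng.uniform(0, 1, size=(15, 2))  # far fewer target observations
y_tgt = (np.sin(3 * (X_tgt[:, 0] - 0.1)) + 0.5 * X_tgt[:, 1]
         + 0.05 * rng.normal(size=15))

# Step 1: fit one surrogate per dataset, modeling hyperparameters -> error.
surr_src = GaussianProcessRegressor(normalize_y=True).fit(X_src, y_src)
surr_tgt = GaussianProcessRegressor(normalize_y=True).fit(X_tgt, y_tgt)

# Step 2: align the surrogates. For a grid of candidate configurations,
# pair each source configuration with the target configuration whose
# predicted error is closest to the source surrogate's prediction.
grid = rng.uniform(0, 1, size=(2000, 2))
pred_src = surr_src.predict(grid)
pred_tgt = surr_tgt.predict(grid)
pairs = np.array([grid[np.argmin(np.abs(pred_tgt - e))] for e in pred_src])

# Step 3: train a neural network to learn the transfer function
# (source configuration -> comparably performing target configuration).
transfer = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=0).fit(grid, pairs)

# Apply: map the best source configuration to a target candidate.
best_src = grid[np.argmin(pred_src)]
suggestion = transfer.predict(best_src.reshape(1, -1))[0]
print("source optimum:", best_src, "-> target suggestion:", suggestion)
```

In practice the suggested configuration would seed a handful of further evaluations on the dataset of interest rather than be adopted as-is, which is what makes the approach attractive when each training run takes days.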


Related research

11/13/2017
All-Transfer Learning for Deep Neural Networks and its Application to Sepsis Classification
In this article, we propose a transfer learning method for deep neural n...

06/14/2023
Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset
There are numerous advantages of deep neural network surrogate modeling ...

10/15/2018
Hyperparameter Learning via Distributional Transfer
Bayesian optimisation is a popular technique for hyperparameter learning...

01/02/2019
Multi-level CNN for lung nodule classification with Gaussian Process assisted hyperparameter optimization
This paper investigates lung nodule classification by using deep neural ...

10/15/2021
Improving Hyperparameter Optimization by Planning Ahead
Hyperparameter optimization (HPO) is generally treated as a bi-level opt...

09/09/2019
Training Deep Neural Networks by optimizing over nonlocal paths in hyperparameter space
Hyperparameter optimization is both a practical issue and an interesting...

01/26/2019
A Practical Bandit Method with Advantages in Neural Network Tuning
Stochastic bandit algorithms can be used for challenging non-convex opti...
