Use of static surrogates in hyperparameter optimization

03/14/2021
by Dounia Lakhmiri, et al.

Optimizing the hyperparameters and architecture of a neural network is a lengthy but necessary phase in the development of any new application. This time-consuming process benefits from strategies that quickly discard low-quality configurations and focus the computational budget on more promising candidates. This work enhances HyperNOMAD, a library that adapts a direct-search derivative-free optimization algorithm to tune both the architecture and the training of a neural network simultaneously, by targeting two key steps of its execution: cheap approximations, in the form of static surrogates, are used to trigger the early stopping of a configuration's evaluation and to rank pools of candidates. These additions are shown to reduce HyperNOMAD's resource consumption without harming the quality of the proposed solutions.
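The abstract describes two uses of a static surrogate: stopping the evaluation of a single configuration early, and ranking a pool of candidates so that expensive evaluations are spent on the most promising ones. The sketch below illustrates that general mechanism on a toy objective; it is not HyperNOMAD's API, and every name in it (Config, true_objective, surrogate, the margin parameter) is an assumption made for illustration.

```python
# A minimal sketch, NOT HyperNOMAD's actual API: it only illustrates how a cheap
# static surrogate can (1) rank a pool of candidate configurations and
# (2) trigger early stopping, so the expensive evaluation is reserved for
# promising candidates. All names and the toy objective are assumptions.

import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Config:
    learning_rate: float
    num_layers: int

def true_objective(cfg: Config) -> float:
    """Stand-in for the expensive evaluation (e.g. full training + validation)."""
    # Toy landscape with its optimum near learning_rate=0.01 and 4 layers.
    return 1e4 * (cfg.learning_rate - 0.01) ** 2 + (cfg.num_layers - 4) ** 2

def surrogate(cfg: Config) -> float:
    """Cheap static surrogate, e.g. the validation loss after a few epochs.
    It only needs to roughly preserve the ranking of candidates."""
    return true_objective(cfg) + random.gauss(0.0, 0.5)  # inexact by design

def rank_candidates(pool: List[Config]) -> List[Config]:
    """Order the pool so the most promising candidates are evaluated first."""
    return sorted(pool, key=surrogate)

def evaluate_with_early_stop(cfg: Config, incumbent: float,
                             margin: float = 1.0) -> Optional[float]:
    """Skip the expensive evaluation when the surrogate already predicts the
    candidate to be clearly worse than the incumbent."""
    if surrogate(cfg) > incumbent + margin:
        return None                      # early stop: full evaluation avoided
    return true_objective(cfg)

if __name__ == "__main__":
    random.seed(0)
    pool = [Config(10 ** random.uniform(-4, -1), random.randint(1, 8))
            for _ in range(20)]

    best_val, best_cfg = float("inf"), None
    for cfg in rank_candidates(pool):
        val = evaluate_with_early_stop(cfg, best_val)
        if val is not None and val < best_val:
            best_val, best_cfg = val, cfg
    print(f"best config: {best_cfg}, objective value: {best_val:.3f}")
```

In practice the surrogate would be something like a truncated training run or training on a reduced dataset; the key property is that it is much cheaper than the full evaluation while correlating well enough with its outcome to guide ranking and early stopping.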

