DC and SA: Robust and Efficient Hyperparameter Optimization of Multi-subnetwork Deep Learning Models

02/24/2022
by Alex H. Treacher, et al.

We present two novel hyperparameter optimization strategies for deep learning models with a modular architecture composed of multiple subnetworks. As networks built from multiple subnetworks are applied more frequently in machine learning, methods are needed to optimize their hyperparameters efficiently. Existing hyperparameter searches are general-purpose and can be used to optimize such networks; however, by exploiting the multi-subnetwork architecture, these searches can be sped up substantially. The proposed methods offer faster convergence to a better-performing final model. To this end, we propose two independent approaches that enhance prior algorithms: 1) a divide-and-conquer approach, in which the best subnetworks of top-performing models are combined, allowing more rapid sampling of the hyperparameter search space; and 2) a subnetwork-adaptive approach that distributes computational resources based on the importance of each subnetwork, allowing more intelligent resource allocation. Both approaches can be flexibly applied to many hyperparameter optimization algorithms. To illustrate this, we combine them with the commonly used Bayesian optimization method, and test the resulting approaches on both synthetic and real-world examples across multiple network types, including convolutional neural networks and dense feed-forward neural networks. Compared to a comparable Bayesian optimization approach, our approaches show an increased optimization efficiency of up to 23.62x, and a final performance boost of up to 3.5% accuracy for classification and 4.4 MSE for regression.
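The full paper is not reproduced on this page, but the two strategies described in the abstract can be sketched schematically. The snippet below is a minimal, hypothetical illustration, not the authors' code: all function and variable names (divide_and_conquer_candidates, adaptive_budget, the trial representation) are assumptions. The first function recombines the per-subnetwork hyperparameters of top-scoring trials so that strong subnetworks found in different trials can be evaluated together; the second splits a trial budget across subnetworks in proportion to an assumed importance score.

```python
import itertools

# A "trial" is a (hyperparams, score) pair, where hyperparams maps
# subnetwork name -> that subnetwork's hyperparameter dict.

def divide_and_conquer_candidates(trials, top_k=3):
    """Recombine the subnetwork configs of the top-k trials.

    Each candidate takes, for every subnetwork, the config that one of
    the top trials used for that subnetwork, so the cross-product of
    best-performing subnetwork configs can be sampled directly.
    """
    top = sorted(trials, key=lambda t: t[1], reverse=True)[:top_k]
    subnets = list(top[0][0].keys())
    pools = [[t[0][s] for t in top] for s in subnets]
    return [dict(zip(subnets, combo)) for combo in itertools.product(*pools)]

def adaptive_budget(importance, total_trials):
    """Split an optimization budget across subnetworks in proportion
    to an importance score (e.g., estimated performance sensitivity)."""
    total = sum(importance.values())
    return {s: max(1, round(total_trials * w / total))
            for s, w in importance.items()}

# Toy usage with two subnetworks (a CNN branch and a dense branch):
trials = [
    ({"cnn": {"filters": 32}, "dense": {"units": 64}}, 0.91),
    ({"cnn": {"filters": 64}, "dense": {"units": 128}}, 0.89),
    ({"cnn": {"filters": 16}, "dense": {"units": 256}}, 0.85),
]
print(divide_and_conquer_candidates(trials, top_k=2))
print(adaptive_budget({"cnn": 0.7, "dense": 0.3}, total_trials=20))
```

In this sketch the recombined candidates would feed back into whatever base search is being used (the paper pairs the strategies with Bayesian optimization); the budget split simply biases how many trials each subnetwork's hyperparameters receive.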

