A Practical Bandit Method with Advantages in Neural Network Tuning

01/26/2019
by Tianyu Wang et al.

Stochastic bandit algorithms can be used for challenging non-convex optimization problems. Hyperparameter tuning of neural networks is particularly challenging, necessitating new approaches. To this end, we present a method that adaptively partitions the combined space of hyperparameters, context, and training resources (e.g., total number of training iterations). By adaptively partitioning this space, the algorithm focuses on the portions of the hyperparameter search space that matter most in practice. By including the resources in the combined space, the method tends to use fewer training resources overall. Our experiments show that this method can surpass state-of-the-art methods in tuning neural networks on benchmark datasets. In some cases, our implementations can achieve the same levels of accuracy on benchmark datasets as existing state-of-the-art approaches while saving over 50% of the training resources.
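To make the idea of adaptive partitioning over a joint (hyperparameter, resource) space concrete, here is a minimal Python sketch. It is not the paper's algorithm: the class and function names are hypothetical, the UCB-style cell scores and widest-dimension halving splits are generic design choices, and the toy objective stands in for an actual train-and-validate run whose noise shrinks as more resources are spent.

```python
# Illustrative sketch of an adaptive-partitioning bandit over a joint
# (hyperparameter, resource) space. NOT the paper's method; all names
# are hypothetical and the objective is a stand-in for real training.
import math
import random

class Cell:
    """An axis-aligned box in the joint [hyperparameter x resource] space."""
    def __init__(self, bounds):
        self.bounds = bounds    # list of (lo, hi) per dimension
        self.rewards = []       # rewards observed for points drawn in this cell

    def sample_point(self):
        return [random.uniform(lo, hi) for lo, hi in self.bounds]

    def split(self):
        # Halve the widest dimension, yielding two child cells.
        widths = [hi - lo for lo, hi in self.bounds]
        d = max(range(len(widths)), key=widths.__getitem__)
        lo, hi = self.bounds[d]
        mid = (lo + hi) / 2
        left = [b if i != d else (lo, mid) for i, b in enumerate(self.bounds)]
        right = [b if i != d else (mid, hi) for i, b in enumerate(self.bounds)]
        return Cell(left), Cell(right)

    def score(self, t):
        # Optimistic (UCB-style) score: mean reward plus exploration bonus.
        if not self.rewards:
            return float("inf")
        mean = sum(self.rewards) / len(self.rewards)
        return mean + math.sqrt(2 * math.log(t + 1) / len(self.rewards))

def tune(objective, bounds, budget=200, split_after=8):
    cells = [Cell(bounds)]
    best = (-float("inf"), None)
    for t in range(budget):
        cell = max(cells, key=lambda c: c.score(t))  # most promising cell
        x = cell.sample_point()
        r = objective(x)                             # last dim = resource level
        cell.rewards.append(r)
        best = max(best, (r, x))
        if len(cell.rewards) >= split_after:         # refine well-sampled cells
            cells.remove(cell)
            cells.extend(cell.split())
    return best

# Toy objective: x[0] is a "learning rate", x[1] a normalized resource level;
# cheap evaluations (small x[1]) are noisier, mimicking partial training.
def toy_objective(x):
    lr, resource = x
    signal = -abs(math.log10(lr) + 2.5)      # best near lr = 10**-2.5
    noise = random.gauss(0, 0.5 * (1 - resource))
    return signal + noise - 0.1 * resource   # small cost for spending resources

if __name__ == "__main__":
    reward, point = tune(toy_objective, bounds=[(1e-4, 1e-1), (0.1, 1.0)])
    print("best reward %.3f at lr=%.5f, resource=%.2f"
          % (reward, point[0], point[1]))
```

Because the resource level is just another partitioned dimension, the tuner can learn that cheap, low-resource evaluations suffice in unpromising regions and reserve long training runs for promising ones, which is the intuition behind the resource savings the abstract describes.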


Related research

Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization (03/21/2016)
Performance of machine learning algorithms depends critically on identif...

L^2NAS: Learning to Optimize Neural Architectures via Continuous-Action Reinforcement Learning (09/25/2021)
Neural architecture search (NAS) has achieved remarkable results in deep...

How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers (10/19/2020)
Many optimizers have been proposed for training deep neural networks, an...

Hyperparameter Optimization through Neural Network Partitioning (04/28/2023)
Well-tuned hyperparameters are crucial for obtaining good generalization...

Hyperparameter Transfer Learning through Surrogate Alignment for Efficient Deep Neural Network Training (07/31/2016)
Recently, several optimization methods have been successfully applied to...

A Lipschitz Bandits Approach for Continuous Hyperparameter Optimization (02/03/2023)
One of the most critical problems in machine learning is HyperParameter ...

Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians (10/26/2020)
Hyperparameter optimization of neural networks can be elegantly formulat...
