Population Based Training of Neural Networks

11/27/2017
by Max Jaderberg, et al.

Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present Population Based Training (PBT), a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.
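The exploit-and-explore loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the population is modelled as plain dicts, the 25% truncation-selection cutoff and the 0.8/1.2 perturbation factors are assumptions chosen for the example, and real PBT runs each worker asynchronously.

```python
import random

def exploit_and_explore(population, perturb_factors=(0.8, 1.2)):
    """One PBT step: the bottom workers copy the weights and hyperparameters
    of randomly chosen top workers (exploit), then multiplicatively perturb
    the copied hyperparameters (explore). Returns the current best worker."""
    ranked = sorted(population, key=lambda w: w["score"], reverse=True)
    cutoff = max(1, len(ranked) // 4)          # bottom 25% exploit top 25%
    top, bottom = ranked[:cutoff], ranked[-cutoff:]
    for worker in bottom:
        source = random.choice(top)
        worker["weights"] = dict(source["weights"])   # inherit model weights
        worker["hparams"] = {k: v * random.choice(perturb_factors)
                             for k, v in source["hparams"].items()}
    return ranked[0]
```

Interleaving this step with ordinary gradient-based training of each worker is what yields a hyperparameter *schedule*: the surviving lineage carries a history of perturbed settings rather than a single fixed configuration.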

