
Bayesian Optimization for Iterative Learning

by Vu Nguyen, et al.

The success of deep (reinforcement) learning systems crucially depends on the correct choice of hyperparameters, which are notoriously sensitive and expensive to evaluate. Training these systems typically requires running iterative processes over multiple epochs or episodes. Traditional approaches consider only the final performance of a hyperparameter configuration, even though intermediate information from the learning curve is readily available. In this paper, we present a Bayesian optimization approach that exploits the iterative structure of learning algorithms for efficient hyperparameter tuning. First, we transform each training curve into a numeric score. Second, we selectively augment the data using auxiliary information from the curve. This augmentation step enables efficient modeling while preventing the ill-conditioning of the Gaussian process covariance matrix that occurs when the whole curve is added. We demonstrate the efficiency of our algorithm by tuning hyperparameters for the training of deep reinforcement learning agents and convolutional neural networks. Our algorithm outperforms all existing baselines in identifying the optimal hyperparameters in minimal time.


Bayesian Optimization Using Monotonicity Information and Its Application in Machine Learning Hyperparameter

We propose an algorithm for a family of optimization problems where the ...

Efficient Hyperparameter Optimization for Physics-based Character Animation

Physics-based character animation has seen significant advances in recen...

Fast Hyperparameter Tuning using Bayesian Optimization with Directional Derivatives

In this paper we develop a Bayesian optimization based hyperparameter tu...

Hyperparameter Optimization: A Spectral Approach

We give a simple, fast algorithm for hyperparameter optimization inspire...

Quantity vs. Quality: On Hyperparameter Optimization for Deep Reinforcement Learning

Reinforcement learning algorithms can show strong variation in performan...

Metaoptimization on a Distributed System for Deep Reinforcement Learning

Training intelligent agents through reinforcement learning is a notoriou...

DEEP-BO for Hyperparameter Optimization of Deep Networks

The performance of deep neural networks (DNN) is very sensitive to the p...