Fast Efficient Hyperparameter Tuning for Policy Gradients

02/18/2019
by Supratik Paul et al.

The performance of policy gradient methods is sensitive to hyperparameter settings that must be tuned for any new application. Widely used grid search methods for tuning hyperparameters are sample inefficient and computationally expensive. More advanced methods like Population Based Training that learn optimal schedules for hyperparameters instead of fixed settings can yield better results, but are also sample inefficient and computationally expensive. In this paper, we propose Hyperparameter Optimisation on the Fly (HOOF), a gradient-free meta-learning algorithm that can automatically learn an optimal schedule for hyperparameters that affect the policy update directly through the gradient. The main idea is to use existing trajectories sampled by the policy gradient method to optimise a one-step improvement objective, yielding a sample and computationally efficient algorithm that is easy to implement. Our experimental results across multiple domains and algorithms show that using HOOF to learn these hyperparameter schedules leads to faster learning with improved performance.
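The core mechanism described in the abstract lends itself to a short illustration. Below is a minimal sketch of one HOOF-style iteration for a single hyperparameter (a learning rate): candidate values are scored on the trajectories already collected by the policy gradient method, by re-weighting their returns under each candidate-updated policy, and the best-scoring value is used for the actual update. The helper names (policy_update, traj_log_prob) and the weighted importance sampling estimator are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def hoof_select_hyperparameter(policy, trajectories, candidates, policy_update, traj_log_prob):
    # One HOOF-style selection step (illustrative sketch, not the authors' exact code).
    # policy        -- current policy parameters
    # trajectories  -- list of (states, actions, total_return) sampled with the current policy
    # candidates    -- candidate values of the hyperparameter being tuned (e.g. a learning rate)
    # policy_update -- hypothetical helper: (policy, trajectories, value) -> candidate updated policy
    # traj_log_prob -- hypothetical helper: (policy, states, actions) -> trajectory log-probability
    best_value, best_score = None, -np.inf
    for value in candidates:
        candidate = policy_update(policy, trajectories, value)
        # Score the candidate policy on the *existing* trajectories via weighted
        # importance sampling, so no additional environment samples are needed.
        weights, returns = [], []
        for states, actions, total_return in trajectories:
            log_w = traj_log_prob(candidate, states, actions) - traj_log_prob(policy, states, actions)
            weights.append(np.exp(log_w))
            returns.append(total_return)
        weights = np.asarray(weights)
        score = np.dot(weights, returns) / (weights.sum() + 1e-8)
        if score > best_score:
            best_value, best_score = value, score
    return best_value  # hyperparameter used for this iteration's actual policy update

Because the scoring step reuses trajectories that were sampled anyway, each iteration's hyperparameter search costs only a few cheap re-weightings rather than extra rollouts, which is where the sample and computational efficiency claimed in the abstract comes from.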
