Dynamic Update-to-Data Ratio: Minimizing World Model Overfitting

03/17/2023
by Nicolai Dorka, et al.

Early stopping based on validation-set performance is a popular approach to finding the right balance between under- and overfitting in supervised learning. In reinforcement learning, however, early stopping is not applicable even to supervised sub-problems such as world model learning, because the dataset is continually evolving. As a solution, we propose a new general method that dynamically adjusts the update-to-data (UTD) ratio during training, based on under- and overfitting detection on a small subset of the continuously collected experience that is not used for training. We apply our method to DreamerV2, a state-of-the-art model-based reinforcement learning algorithm, and evaluate it on the DeepMind Control Suite and the Atari 100k benchmark. The results demonstrate that adjusting the UTD ratio with our approach balances under- and overfitting better than DreamerV2's default setting and is competitive with an extensive hyperparameter search, which is not feasible for many applications. Our method eliminates the need to set the UTD hyperparameter by hand and even increases robustness to other learning-related hyperparameters, further reducing the amount of necessary tuning.
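To make the mechanism concrete, below is a minimal Python sketch of the idea as the abstract describes it: route a small fraction of collected experience to a held-out buffer that is never trained on, and periodically compare training and held-out losses of the world model to decide whether to raise or lower the UTD ratio. The class name, the multiplicative adjustment rule, and all thresholds here are illustrative assumptions, not the paper's exact algorithm.

```python
import random


class DynamicUTD:
    """Heuristic controller that adapts the update-to-data (UTD) ratio.

    Sketch only: a small fraction of experience is held out from
    training, and the world model's loss on that held-out data is used
    to detect under- and overfitting. Hyperparameter values and the
    adjustment rule are hypothetical.
    """

    def __init__(self, utd=1.0, factor=1.3, min_utd=0.05, max_utd=16.0,
                 holdout_prob=0.05):
        self.utd = utd                    # gradient updates per environment step
        self.factor = factor              # multiplicative step per check
        self.min_utd, self.max_utd = min_utd, max_utd
        self.holdout_prob = holdout_prob
        self.train_buffer, self.val_buffer = [], []
        self.prev = None                  # (train_loss, val_loss) at last check

    def add_experience(self, transition):
        # Route a small random subset of experience to a held-out
        # validation buffer that is never used for gradient updates.
        if random.random() < self.holdout_prob:
            self.val_buffer.append(transition)
        else:
            self.train_buffer.append(transition)

    def adjust(self, train_loss, val_loss):
        # Overfitting signature: validation loss rises while training
        # loss keeps falling -> do fewer updates per collected sample.
        # Otherwise assume the model can still benefit from more updates.
        if self.prev is not None:
            prev_train, prev_val = self.prev
            overfitting = val_loss > prev_val and train_loss <= prev_train
            if overfitting:
                self.utd = max(self.min_utd, self.utd / self.factor)
            else:
                self.utd = min(self.max_utd, self.utd * self.factor)
        self.prev = (train_loss, val_loss)
        return self.utd
```

In a DreamerV2-style training loop, `adjust` would be called periodically with losses evaluated on batches from the two buffers, and the returned UTD ratio would determine how many world-model gradient steps to perform per environment step until the next check.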


