A Study on Overfitting in Deep Reinforcement Learning
Recent years have witnessed significant progress in deep Reinforcement Learning (RL). Empowered by large-scale neural networks, carefully designed architectures, novel training algorithms, and massively parallel computing devices, researchers are able to attack many challenging RL problems. However, in machine learning, more training power comes with a potential risk of more overfitting. As deep RL techniques are applied to critical problems such as healthcare and finance, it is important to understand the generalization behavior of the trained agents. In this paper, we conduct a systematic study of standard RL agents and find that they could overfit in various ways. Moreover, overfitting could happen "robustly": commonly used techniques in RL that add stochasticity do not necessarily prevent or detect overfitting. In particular, the same agents and learning algorithms could have drastically different test performance, even when they all achieve optimal rewards during training. These observations call for more principled and careful evaluation protocols in RL. We conclude with a general discussion of overfitting in RL and a study of generalization behavior from the perspective of inductive bias.
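The evaluation protocol the abstract argues for can be made concrete with a small, self-contained sketch. This is not the paper's code; the toy task and all names here are illustrative assumptions. The idea: train a memorizing agent on a few environment instances, then report reward on held-out instances generated from fresh seeds. A policy that is optimal on every training instance can still collapse on the test set.

```python
import random

# Illustrative sketch (not from the paper): a train/test split over
# environment instances, the protocol the abstract calls for.
N_STATES, N_ACTIONS, HORIZON = 10, 4, 10

def make_env(seed):
    """Toy episodic task: each state has one rewarding action, chosen
    by the instance seed. The agent never observes the seed."""
    rng = random.Random(seed)
    good = [rng.randrange(N_ACTIONS) for _ in range(N_STATES)]
    def step(state, action):
        reward = 1.0 if action == good[state] else 0.0
        return (state + 1) % N_STATES, reward
    return step

def run_episode(policy, step):
    state, total = 0, 0.0
    for _ in range(HORIZON):
        state, r = step(state, policy[state])
        total += r
    return total

train_seeds = [0]                    # a single instance: trivial to memorize
test_seeds = list(range(100, 120))   # held-out instances

# "Training" degenerates to memorization: record which action paid off
# in each state across the training instances, then act greedily.
counts = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
for seed in train_seeds:
    step = make_env(seed)
    for s in range(N_STATES):
        for a in range(N_ACTIONS):
            _, r = step(s, a)
            counts[s][a] += r
policy = [max(range(N_ACTIONS), key=lambda a: counts[s][a])
          for s in range(N_STATES)]

train = sum(run_episode(policy, make_env(s)) for s in train_seeds) / len(train_seeds)
test = sum(run_episode(policy, make_env(s)) for s in test_seeds) / len(test_seeds)
print(f"mean train reward: {train:.2f} / {HORIZON}")  # optimal: 10.00
print(f"mean test reward:  {test:.2f} / {HORIZON}")   # near chance: ~2.50
```

By construction, the policy is optimal on the training instance (reward 10/10) yet scores near the chance level of HORIZON / N_ACTIONS ≈ 2.5 on held-out seeds; reporting only the training curve would hide this gap entirely, which is the paper's point about evaluation protocols.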