Generalization in Reinforcement Learning with Selective Noise Injection and Information Bottleneck

by Maximilian Igl et al.

The ability of policies to generalize to new environments is key to the broad application of RL agents. A promising approach to preventing an agent's policy from overfitting to a limited set of training environments is to apply regularization techniques originally developed for supervised learning. However, there are stark differences between supervised learning and RL. We discuss these differences and propose modifications to existing regularization techniques in order to better adapt them to RL. In particular, we focus on regularization techniques relying on the injection of noise into the learned function, a family that includes some of the most widely used approaches such as Dropout and Batch Normalization. To adapt them to RL, we propose Selective Noise Injection (SNI), which maintains the regularizing effect of the injected noise while mitigating its adverse effect on gradient quality. Furthermore, we demonstrate that the Information Bottleneck (IB) is a particularly well-suited regularization technique for RL, as it is effective in the low-data regime encountered early in the training of RL agents. Combining the IB with SNI, we significantly outperform current state-of-the-art results, including on the recently proposed generalization benchmark CoinRun.
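The two ideas in the abstract can be illustrated in a few lines of code. The sketch below is not the authors' implementation: the mixing coefficient `lam`, the per-unit dropout on the logits, and all function names are illustrative assumptions. It shows the core of SNI (act with the noise-free pass, train on a mixture of the clean and noisy passes so the noise regularizes without corrupting the whole gradient) and the IB penalty as a KL divergence between a diagonal-Gaussian latent encoding and a standard normal prior.

```python
import math
import random

def policy_logits(x, W, dropout_rate=0.0, rng=None):
    """Linear policy head over feature vector x.
    Dropout injects noise into the outputs when dropout_rate > 0."""
    logits = []
    for col in range(len(W[0])):
        s = sum(x[i] * W[i][col] for i in range(len(x)))
        if dropout_rate > 0.0:
            # Inverted dropout: randomly zero a unit, rescale survivors.
            if rng.random() >= dropout_rate:
                s = s / (1.0 - dropout_rate)
            else:
                s = 0.0
        logits.append(s)
    return logits

def sni_logits(x, W, lam=0.5, dropout_rate=0.5, rng=None):
    """Selective Noise Injection (illustrative sketch).
    Rollouts use the noise-free pass; the training objective mixes the
    clean and noisy passes with coefficient `lam` (an assumption here),
    so the injected noise still regularizes the learned function while
    only part of the gradient is exposed to it."""
    clean = policy_logits(x, W)                        # used for acting
    noisy = policy_logits(x, W, dropout_rate, rng)     # regularized pass
    mixed = [lam * c + (1.0 - lam) * n for c, n in zip(clean, noisy)]
    return mixed, clean

def ib_kl(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ): the Information Bottleneck
    penalty on a stochastic latent encoding; it is zero when the
    encoder matches the prior."""
    return sum(0.5 * (m * m + s * s - 1.0) - math.log(s)
               for m, s in zip(mu, sigma))

# Toy usage with hypothetical features and weights.
rng = random.Random(0)
x = [0.5, -1.0, 2.0]
W = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.4]]
mixed, clean = sni_logits(x, W, rng=rng)
penalty = ib_kl([0.1, -0.2], [0.9, 1.1])
```

Setting `lam=1.0` recovers a purely noise-free objective, while `lam=0.0` recovers ordinary noisy training; the paper's contribution is that an intermediate mixture keeps the regularization benefit at a lower cost in gradient quality.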


Related papers

- Quantifying Generalization in Reinforcement Learning
- Generalization and Regularization in DQN
- Dynamics Generalization via Information Bottleneck in Deep Reinforcement Learning
- Local Feature Swapping for Generalization in Reinforcement Learning
- Simple Noisy Environment Augmentation for Reinforcement Learning
- An Empirical Study on Hyperparameters and their Interdependence for RL Generalization
- Adaptive Noise Injection: A Structure-Expanding Regularization for RNN
