Understanding and Preventing Capacity Loss in Reinforcement Learning

04/20/2022 — by Clare Lyle, et al.

The reinforcement learning (RL) problem is rife with sources of non-stationarity, making it a notoriously difficult problem domain for the application of neural networks. We identify a mechanism by which non-stationary prediction targets can prevent learning progress in deep RL agents: capacity loss, whereby networks trained on a sequence of target values lose their ability to quickly update their predictions over time. We demonstrate that capacity loss occurs in a range of RL agents and environments, and is particularly damaging to performance in sparse-reward tasks. We then present a simple regularizer, Initial Feature Regularization (InFeR), that mitigates this phenomenon by regressing a subspace of features towards its value at initialization, leading to significant performance improvements in sparse-reward environments such as Montezuma's Revenge. We conclude that preventing capacity loss is crucial to enable agents to maximally benefit from the learning signals they obtain throughout the entire training trajectory.
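The core of InFeR, as described above, is to regress a projection of the network's features toward the value that projection had at initialization, using a frozen copy of the initial network as the regression target. The following is a minimal NumPy sketch of that idea; the network shape, dimensions, and function names (`mlp_features`, `infer_penalty`) are illustrative assumptions, not the paper's implementation, and the auxiliary heads are held fixed here for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_features(x, params):
    # Illustrative one-hidden-layer feature extractor: ReLU(x W1 + b1).
    W1, b1 = params
    return np.maximum(x @ W1 + b1, 0.0)

# Hypothetical dimensions: input size, feature size, number of auxiliary heads.
d_in, d_feat, n_heads = 4, 16, 5

params = [rng.normal(scale=0.5, size=(d_in, d_feat)), np.zeros(d_feat)]
heads = rng.normal(scale=0.5, size=(d_feat, n_heads))  # auxiliary linear heads

# Frozen snapshot of the feature-extractor parameters at initialization;
# its head outputs serve as the fixed regression targets.
params_init = [p.copy() for p in params]

def infer_penalty(x, params, alpha=0.1):
    """InFeR-style regularizer: penalize the auxiliary head outputs for
    drifting away from the outputs of the frozen initial network."""
    current = mlp_features(x, params) @ heads
    target = mlp_features(x, params_init) @ heads  # fixed target
    return alpha * np.mean((current - target) ** 2)

x = rng.normal(size=(8, d_in))
print(infer_penalty(x, params))   # 0.0 at initialization
params[0] = params[0] + 0.3       # simulate parameter drift during training
print(infer_penalty(x, params))   # grows once the features drift
```

In training, this penalty would be added to the usual TD loss, so gradient updates that would collapse the feature subspace spanned by the auxiliary heads are discouraged, preserving capacity without freezing the features outright.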

Related research:

- 06/10/2020 — The Impact of Non-stationarity on Generalisation in Deep Reinforcement Learning
  Non-stationarity arises in Reinforcement Learning (RL) even in stationar...

- 03/02/2023 — Understanding plasticity in neural networks
  Plasticity, the ability of a neural network to quickly change its predic...

- 06/05/2022 — Learning Dynamics and Generalization in Reinforcement Learning
  Solving a reinforcement learning (RL) problem poses two competing challe...

- 05/26/2023 — A Reminder of its Brittleness: Language Reward Shaping May Hinder Learning for Instruction Following Agents
  Teaching agents to follow complex written instructions has been an impor...

- 04/13/2022 — Local Feature Swapping for Generalization in Reinforcement Learning
  Over the past few years, the acceleration of computing resources and res...

- 06/22/2023 — Transferable Curricula through Difficulty Conditioned Generators
  Advancements in reinforcement learning (RL) have demonstrated superhuman...

- 03/13/2023 — Loss of Plasticity in Continual Deep Reinforcement Learning
  The ability to learn continually is essential in a complex and changing ...
