The Dormant Neuron Phenomenon in Deep Reinforcement Learning

02/24/2023
by Ghada Sokar, et al.

In this work we identify the dormant neuron phenomenon in deep reinforcement learning, where an agent's network suffers from an increasing number of inactive neurons, thereby affecting network expressivity. We demonstrate the presence of this phenomenon across a variety of algorithms and environments, and highlight its effect on learning. To address this issue, we propose a simple and effective method (ReDo) that Recycles Dormant neurons throughout training. Our experiments demonstrate that ReDo maintains the expressive power of networks by reducing the number of dormant neurons and results in improved performance.
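The recycling idea described above admits a compact implementation. Below is a minimal sketch, assuming a PyTorch-style fully connected network: a neuron's dormancy score is its mean absolute activation normalized by the layer-wide average, neurons scoring at or below a threshold tau are treated as dormant, their incoming weights are re-initialized, and their outgoing weights are zeroed so the recycled units do not perturb the network's current outputs. The helper names, the use of PyTorch's default Linear initialization for the fresh weights, and the value of tau are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn


def dormancy_scores(activations: torch.Tensor) -> torch.Tensor:
    # activations: [batch, num_neurons] post-activation outputs of one layer.
    # Score each neuron by its mean |activation| over the batch,
    # normalized by the layer-wide average.
    per_neuron = activations.abs().mean(dim=0)           # [num_neurons]
    return per_neuron / (per_neuron.mean() + 1e-8)


@torch.no_grad()
def recycle_dormant(layer: nn.Linear, next_layer: nn.Linear,
                    activations: torch.Tensor, tau: float = 0.1) -> int:
    # Neurons of `layer` whose score is at or below tau are recycled:
    # their incoming weights and biases are re-initialized (here via a
    # freshly constructed Linear layer), and the corresponding outgoing
    # columns of `next_layer` are zeroed so the recycled units start
    # fresh without changing the network's current output.
    dormant = dormancy_scores(activations) <= tau        # boolean mask, [num_neurons]
    if dormant.any():
        fresh = nn.Linear(layer.in_features, layer.out_features)
        layer.weight[dormant] = fresh.weight[dormant]     # reset incoming weights
        layer.bias[dormant] = fresh.bias[dormant]         # reset biases
        next_layer.weight[:, dormant] = 0.0               # zero outgoing weights
    return int(dormant.sum())
```

In a training loop, such a step would be applied periodically (e.g., every fixed number of gradient updates) to each pair of consecutive layers, using activations collected from a recent batch of inputs.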
