Dynamic Sparse Training for Deep Reinforcement Learning

06/08/2021
by Ghada Sokar, et al.

Deep reinforcement learning has achieved significant success in many decision-making tasks in various fields. However, it requires long training of dense neural networks to obtain good performance, which hinders its applicability on low-resource devices where memory and computation are strictly constrained. As a step towards enabling deep reinforcement learning agents to run on low-resource devices, in this work we propose, for the first time, to dynamically train deep reinforcement learning agents with sparse neural networks from scratch. We adopt the evolution principles of dynamic sparse training in the reinforcement learning paradigm and introduce a training algorithm that jointly optimizes the sparse topology and the weight values to dynamically fit the incoming data. Our approach is easy to integrate into existing deep reinforcement learning algorithms and has several favorable properties. First, it allows significant compression of the network size, which substantially reduces memory and computation costs; this accelerates not only agent inference but also the training process. Second, it speeds up the agent's learning and reduces the number of required training steps. Third, it can achieve higher performance than training the dense counterpart network. We evaluate our approach on OpenAI Gym continuous control tasks. The experimental results show the effectiveness of our approach in achieving higher performance than one of the state-of-the-art baselines with a 50% reduction in network size and floating-point operations (FLOPs). Moreover, our approach can reach the same performance as the dense network with a 40-50% reduction in the number of training steps.
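The evolution principle the abstract refers to is, in the spirit of sparse evolutionary training, a periodic topology update: drop the smallest-magnitude active connections in each sparse layer and regrow the same number of connections at randomly chosen inactive positions, so total sparsity stays fixed while the topology adapts to the incoming data. A minimal NumPy sketch of one such update step (function and parameter names here are illustrative, not taken from the paper's code):

```python
import numpy as np

def update_topology(weights, mask, drop_fraction=0.2, rng=None):
    """One dynamic-sparse-training evolution step (SET-style sketch):
    drop the smallest-magnitude active weights, then regrow the same
    number of connections at random inactive positions, so the total
    number of active connections (the sparsity level) is unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    flat_w = weights.ravel()   # views into the (contiguous) arrays
    flat_m = mask.ravel()
    active = np.flatnonzero(flat_m)
    n_drop = int(drop_fraction * active.size)
    if n_drop == 0:
        return weights, mask
    # Drop: deactivate the n_drop active weights with smallest magnitude.
    drop_idx = active[np.argsort(np.abs(flat_w[active]))[:n_drop]]
    flat_m[drop_idx] = 0
    flat_w[drop_idx] = 0.0
    # Regrow: activate n_drop randomly chosen inactive positions,
    # initialized to zero so subsequent training assigns them values.
    inactive = np.flatnonzero(flat_m == 0)
    grow_idx = rng.choice(inactive, size=n_drop, replace=False)
    flat_m[grow_idx] = 1
    return weights, mask

# Usage: a layer kept at roughly 50% sparsity across an update.
rng = np.random.default_rng(0)
mask = (rng.random((8, 8)) < 0.5).astype(int)
weights = rng.standard_normal((8, 8)) * mask
n_active = mask.sum()
weights, mask = update_topology(weights, mask, drop_fraction=0.25, rng=rng)
assert mask.sum() == n_active  # sparsity level is preserved
```

In the paper this update runs during training alongside ordinary gradient steps on the active weights; the sketch above shows only the topology-evolution part.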


Related Research

- 05/30/2022  RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch
- 01/24/2021  GST: Group-Sparse Training for Accelerating Deep Reinforcement Learning
- 01/31/2022  DNS: Determinantal Point Process Based Neural Network Sampler for Ensemble Reinforcement Learning
- 04/19/2018  Cell Selection with Deep Reinforcement Learning in Sparse Mobile Crowdsensing
- 03/16/2020  Improving Performance in Reinforcement Learning by Breaking Generalization in Neural Networks
- 02/09/2022  Scenario-Assisted Deep Reinforcement Learning
- 11/15/2018  Concept Learning through Deep Reinforcement Learning with Memory-Augmented Neural Networks

Code Repositories

Dynamic-Sparse-Training-for-Deep-Reinforcement-Learning

Implementation for the paper "Dynamic Sparse Training for Deep Reinforcement Learning" in PyTorch.
