
Dynamic Sparse Training for Deep Reinforcement Learning

by Ghada Sokar, et al.

Deep reinforcement learning has achieved significant success in many decision-making tasks across various fields. However, obtaining good performance requires lengthy training of dense neural networks, which hinders applicability on low-resource devices where memory and computation are strictly constrained. As a step toward enabling deep reinforcement learning agents on low-resource devices, we propose, for the first time, to dynamically train deep reinforcement learning agents with sparse neural networks from scratch. We adopt the evolution principles of dynamic sparse training in the reinforcement learning paradigm and introduce a training algorithm that jointly optimizes the sparse topology and the weight values to dynamically fit the incoming data. Our approach is easy to integrate into existing deep reinforcement learning algorithms and offers several favorable advantages. First, it allows for significant compression of the network size, which substantially reduces memory and computation costs; this accelerates not only agent inference but also the training process. Second, it speeds up agent learning and reduces the number of required training steps. Third, it can achieve higher performance than training the dense counterpart network. We evaluate our approach on OpenAI Gym continuous control tasks. The experimental results show the effectiveness of our approach in achieving higher performance than one of the state-of-the-art baselines with a 50% reduction in network size and floating-point operations (FLOPs). Moreover, our approach can reach the same performance achieved by the dense network with a 40-50% reduction in the number of training steps.
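To make the "evolution principles of dynamic sparse training" concrete, the sketch below illustrates the generic prune-and-grow cycle that underlies dynamic sparse training methods such as SET: the network starts from a random sparse topology, and periodically the smallest-magnitude active weights are dropped and an equal number of new connections is grown at random, keeping the sparsity level fixed. This is a minimal NumPy illustration of the topology update only, not the paper's exact algorithm or hyperparameters; the function names and the drop fraction are assumptions for the example.

```python
import numpy as np

def init_sparse_mask(shape, sparsity, rng):
    """Randomly keep a (1 - sparsity) fraction of connections."""
    n = int(np.prod(shape))
    k = int(round((1.0 - sparsity) * n))
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=k, replace=False)] = True
    return mask.reshape(shape)

def prune_and_grow(weights, mask, drop_fraction, rng):
    """One SET-style topology update: drop the smallest-magnitude active
    weights, then grow the same number of new random connections."""
    active = np.flatnonzero(mask)
    n_drop = int(round(drop_fraction * active.size))
    # Drop: active connections with the smallest |weight|.
    mags = np.abs(weights.ravel()[active])
    drop_idx = active[np.argsort(mags)[:n_drop]]
    new_mask = mask.ravel().copy()
    new_mask[drop_idx] = False
    # Grow: the same number of currently inactive positions, chosen at random.
    inactive = np.flatnonzero(~new_mask)
    grow_idx = rng.choice(inactive, size=n_drop, replace=False)
    new_mask[grow_idx] = True
    new_weights = weights.ravel().copy()
    new_weights[drop_idx] = 0.0
    new_weights[grow_idx] = 0.0  # fresh connections start from zero
    return new_weights.reshape(weights.shape), new_mask.reshape(mask.shape)

rng = np.random.default_rng(0)
shape = (8, 8)
mask = init_sparse_mask(shape, sparsity=0.5, rng=rng)   # 32 of 64 weights active
weights = rng.normal(size=shape) * mask
weights, mask = prune_and_grow(weights, mask, drop_fraction=0.3, rng=rng)
print(int(mask.sum()))  # sparsity is preserved: still 32 active connections
```

In an actual DRL agent this update would run periodically during training on each sparse layer of the policy and value networks, with gradient steps applied only to the active weights in between topology updates.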




Related Research

RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch

Training deep reinforcement learning (DRL) models usually requires high ...

GST: Group-Sparse Training for Accelerating Deep Reinforcement Learning

Deep reinforcement learning (DRL) has shown remarkable success in sequen...

AcceRL: Policy Acceleration Framework for Deep Reinforcement Learning

Deep reinforcement learning has achieved great success in various fields...

DNS: Determinantal Point Process Based Neural Network Sampler for Ensemble Reinforcement Learning

Application of ensemble of neural networks is becoming an imminent tool ...

Cell Selection with Deep Reinforcement Learning in Sparse Mobile Crowdsensing

Sparse Mobile CrowdSensing (MCS) is a novel MCS paradigm where data infe...

Scenario-Assisted Deep Reinforcement Learning

Deep reinforcement learning has proven remarkably useful in training age...

Concept Learning through Deep Reinforcement Learning with Memory-Augmented Neural Networks

Deep neural networks have shown superior performance in many regimes to ...

Code Repositories


Implementation for the paper "Dynamic Sparse Training for Deep Reinforcement Learning" in PyTorch.
