Neural Network Optimization for Reinforcement Learning Tasks Using Sparse Computations

01/07/2022
by Dmitry Ivanov, et al.

This article proposes a sparse-computation method for optimizing neural networks in reinforcement learning (RL) tasks. The method combines two ideas: neural network pruning and exploiting correlations in the input data, which makes it possible to update a neuron's state only when the change in its input exceeds a certain threshold. This significantly reduces the number of multiplications performed when running the network. Across a range of RL tasks, we achieved a 20-150x reduction in the number of multiplications without substantial performance losses; in some cases, performance even improved.
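The two ideas above can be illustrated with a minimal sketch: a linear layer whose smallest weights are pruned to zero, and whose outputs are updated incrementally only for inputs that have changed by more than a threshold since the previous step. The class name, parameters (`prune_frac`, `delta`), and thresholds below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

class DeltaSparseLayer:
    """Linear layer that skips multiplications in two ways:
    - pruning: weights below a magnitude cutoff are zeroed out;
    - delta updates: input j's contribution is recomputed only when
      x[j] has changed by more than `delta` since the last call.
    Illustrative sketch only; names and defaults are assumptions.
    """
    def __init__(self, weights, prune_frac=0.5, delta=0.01):
        w = weights.copy()
        # Prune: zero out the fraction of weights with smallest magnitude.
        cutoff = np.quantile(np.abs(w), prune_frac)
        w[np.abs(w) < cutoff] = 0.0
        self.w = w
        self.delta = delta
        self.last_x = np.zeros(w.shape[1])  # inputs seen at the last update
        self.y = np.zeros(w.shape[0])       # cached pre-activations
        self.mults = 0                      # multiplications actually performed

    def forward(self, x):
        # Only inputs that moved by more than `delta` trigger recomputation.
        changed = np.abs(x - self.last_x) > self.delta
        for j in np.where(changed)[0]:
            col = self.w[:, j]
            nz = col != 0                   # pruned weights cost nothing
            # Add the *change* in input j's contribution to the cached output.
            self.y[nz] += col[nz] * (x[j] - self.last_x[j])
            self.mults += int(nz.sum())
            self.last_x[j] = x[j]
        return self.y.copy()
```

With `delta` near zero the first call reproduces the dense product `W @ x` exactly; on temporally correlated inputs (e.g. consecutive RL observations), most entries fall below the threshold and the multiplication count drops accordingly.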

Related research

09/25/2022
On the Opportunities and Challenges of using Animals Videos in Reinforcement Learning
We investigate the use of animals videos to improve efficiency and perfo...

12/31/2021
Single-Shot Pruning for Offline Reinforcement Learning
Deep Reinforcement Learning (RL) is a powerful framework for solving com...

09/14/2022
Learning state correspondence of reinforcement learning tasks for knowledge transfer
Deep reinforcement learning has shown an ability to achieve super-human ...

10/16/2021
Neural Network Pruning Through Constrained Reinforcement Learning
Network pruning reduces the size of neural networks by removing (pruning...

08/19/2019
Mitigating Multi-Stage Cascading Failure by Reinforcement Learning
This paper proposes a cascading failure mitigation strategy based on Rei...

07/16/2021
Boosting the Convergence of Reinforcement Learning-based Auto-pruning Using Historical Data
Recently, neural network compression schemes like channel pruning have b...

09/05/2017
Knowledge Sharing for Reinforcement Learning: Writing a BOOK
This paper proposes a novel deep reinforcement learning (RL) method inte...
