Exploring Sparsity in Recurrent Neural Networks

04/17/2017
by Sharan Narang, et al.

Recurrent Neural Networks (RNNs) are widely used to solve a variety of problems, and as the quantity of data and the amount of available compute have increased, so have model sizes. The number of parameters in recent state-of-the-art networks makes them hard to deploy, especially on mobile phones and embedded devices. The challenge is due to both the size of the model and the time it takes to evaluate it. In order to deploy these RNNs efficiently, we propose a technique to reduce the parameters of a network by pruning weights during the initial training of the network. At the end of training, the parameters of the network are sparse while accuracy is still close to that of the original dense neural network. The network size is reduced by 8x and the time required to train the model remains constant. Additionally, we can prune a larger dense network to achieve better than baseline performance while still significantly reducing the total number of parameters. Pruning RNNs reduces the size of the model and can also help achieve significant inference-time speed-up using sparse matrix multiply. Benchmarks show that using our technique model size can be reduced by 90%.
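The pruning idea described above can be sketched roughly as follows: keep one binary mask per weight matrix, gradually raise a magnitude threshold over the course of training, and permanently zero any weight whose magnitude falls below it. The snippet below is a minimal illustrative sketch in PyTorch, not the paper's exact recipe; the GRU sizes, the linear threshold ramp, and the variable names (masks, final_threshold) are assumptions made for the example.

import torch
import torch.nn as nn

# Illustrative sketch (assumed schedule, not the paper's exact recipe):
# magnitude-based pruning applied while the RNN trains.
model = nn.GRU(input_size=128, hidden_size=256, num_layers=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one persistent binary mask per weight matrix (biases stay dense)
masks = {name: torch.ones_like(p)
         for name, p in model.named_parameters() if "weight" in name}

total_steps = 10_000
final_threshold = 0.05   # assumed final magnitude threshold

for step in range(total_steps):
    x = torch.randn(20, 8, 128)        # (seq_len, batch, input_size) dummy data
    target = torch.randn(20, 8, 256)   # (seq_len, batch, hidden_size)

    out, _ = model(x)
    loss = nn.functional.mse_loss(out, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # ramp the threshold up during the first half of training
    threshold = final_threshold * min(1.0, step / (0.5 * total_steps))

    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                # once pruned, a weight stays at zero: the mask only shrinks
                masks[name] *= (param.abs() >= threshold).float()
                param.mul_(masks[name])

At the end of such a run most weights are exactly zero, so they can be stored in a sparse format and the inference-time speed-up mentioned in the abstract comes from replacing dense matrix multiplies with sparse ones.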


research 11/08/2017
Block-Sparse Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are used in state-of-the-art models in ...

research 01/22/2021
Selfish Sparse RNN Training
Sparse neural networks have been widely applied to reduce the necessary ...

research 01/02/2015
An Empirical Study of the L2-Boost technique with Echo State Networks
A particular case of Recurrent Neural Network (RNN) was introduced at th...

research 07/02/2018
weight-importance sparse training in keyword spotting
Large-size models are implemented in recent ASR systems to deal with co...

research 06/08/2015
Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive,...

research 11/30/2019
One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation
Recent advances in the sparse neural network literature have made it pos...

research 06/03/2019
NodeDrop: A Condition for Reducing Network Size without Effect on Output
Determining an appropriate number of features for each layer in a neural...
