Bayesian Sparsification of Recurrent Neural Networks

07/31/2017
by Ekaterina Lobacheva, et al.

Recurrent neural networks show state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights. The recently proposed Sparse Variational Dropout eliminates the majority of the weights in a feed-forward neural network without a significant loss of quality. We apply this technique to sparsify recurrent neural networks. To account for the specifics of recurrence, we also rely on Binary Variational Dropout for RNNs. We report a 99.5% sparsity level on a sentiment analysis task without a quality drop, and up to 87% sparsity on a language modeling task with only a slight loss of accuracy.
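As a rough illustration of the underlying technique (not the authors' code), the sketch below applies Sparse Variational Dropout (Molchanov et al., 2017) to the weight matrices of a vanilla RNN cell in PyTorch, sampling the weight noise once per sequence in the spirit of variational RNN dropout. Class names, initialization constants, and the pruning threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseVDLinear(nn.Module):
    """Linear layer with Sparse Variational Dropout (illustrative sketch)."""

    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = threshold  # prune weights whose log_alpha exceeds this value

    @property
    def log_alpha(self):
        # alpha = sigma^2 / theta^2, clipped for numerical stability
        return torch.clamp(self.log_sigma2 - torch.log(self.theta ** 2 + 1e-8), -10.0, 10.0)

    def sample_weight(self):
        # One Gaussian sample of the whole weight matrix; for an RNN this sample
        # is drawn once per sequence and reused at every time step.
        if self.training:
            eps = torch.randn_like(self.theta)
            return self.theta + torch.exp(0.5 * self.log_sigma2) * eps
        # At test time, keep only weights with a low enough dropout rate.
        mask = (self.log_alpha < self.threshold).float()
        return self.theta * mask

    def forward(self, x, weight=None):
        w = self.sample_weight() if weight is None else weight
        return F.linear(x, w, self.bias)

    def kl(self):
        # Approximation of the KL term from Molchanov et al. (2017).
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()


class SparseVDRNNCell(nn.Module):
    """Vanilla tanh RNN whose weight matrices use Sparse VD; the weight
    noise is sampled once per sequence, as in variational RNN dropout."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.ih = SparseVDLinear(input_size, hidden_size)
        self.hh = SparseVDLinear(hidden_size, hidden_size)

    def forward(self, x_seq):            # x_seq: (seq_len, batch, input_size)
        w_ih = self.ih.sample_weight()   # shared across all time steps
        w_hh = self.hh.sample_weight()
        h = x_seq.new_zeros(x_seq.size(1), self.hh.theta.size(0))
        for x_t in x_seq:
            h = torch.tanh(self.ih(x_t, w_ih) + self.hh(h, w_hh))
        return h

    def kl(self):
        return self.ih.kl() + self.hh.kl()
```

In such a setup the KL terms of all sparsified layers are added to the task loss during training (usually with an annealed weight), and at test time weights with large log_alpha are zeroed out, which is what yields the reported sparsity levels.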


Related research

12/16/2015  A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Recurrent neural networks (RNNs) stand at the forefront of many recent d...

03/16/2016  Recurrent Dropout without Memory Loss
This paper presents a novel approach to recurrent neural network (RNN) r...

07/08/2018  Learning The Sequential Temporal Information with Recurrent Neural Networks
Recurrent Networks are one of the most powerful and promising artificial...

06/09/2016  MuFuRU: The Multi-Function Recurrent Unit
Recurrent neural networks such as the GRU and LSTM found wide adoption i...

12/20/2014  Modeling Compositionality with Multiplicative Recurrent Neural Networks
We present the multiplicative recurrent neural network as a general mode...

07/14/2016  Using Recurrent Neural Network for Learning Expressive Ontologies
Recently, Neural Networks have been proven extremely effective in many n...

10/31/2017  Fraternal Dropout
Recurrent neural networks (RNNs) are an important class of architectures am...
