Bayesian Sparsification of Gated Recurrent Neural Networks

12/12/2018
by Ekaterina Lobacheva, et al.

Bayesian methods have been successfully applied to sparsify the weights of neural networks and to remove structural units from them, e.g. neurons. We apply and further develop this approach for gated recurrent architectures. Specifically, in addition to sparsifying individual weights and neurons, we propose to sparsify the preactivations of gates and the information flow in LSTM. This makes some gates and information-flow components constant, speeds up the forward pass and improves compression. Moreover, the resulting structure of gate sparsity is interpretable and depends on the task. Code is available on GitHub: https://github.com/tipt0p/SparseBayesianRNN
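For intuition, below is a minimal PyTorch sketch of how sparsity-inducing multiplicative noise on LSTM gate preactivations can be implemented; it is not the authors' released code (see the GitHub link above for that). The class name SparseGateLSTMCell, the use of the KL approximation from Molchanov et al. (2017), and the log-alpha pruning threshold are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseGateLSTMCell(nn.Module):
    """LSTM cell with sparsity-inducing multiplicative noise on gate preactivations (illustrative sketch)."""

    def __init__(self, input_size, hidden_size, logalpha_threshold=3.0):
        super().__init__()
        self.hidden_size = hidden_size
        self.threshold = logalpha_threshold
        # Bias is kept separate so that a pruned (zeroed) preactivation leaves
        # the corresponding gate at a learned constant value sigmoid(bias).
        self.x2h = nn.Linear(input_size, 4 * hidden_size, bias=False)
        self.h2h = nn.Linear(hidden_size, 4 * hidden_size, bias=False)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))
        # Variational parameters of the per-unit multipliers z on the preactivations.
        self.z_mu = nn.Parameter(torch.ones(4 * hidden_size))
        self.z_logsigma2 = nn.Parameter(torch.full((4 * hidden_size,), -10.0))

    def log_alpha(self):
        # log alpha = log sigma^2 - log mu^2; a large alpha means the unit is mostly noise.
        return self.z_logsigma2 - 2.0 * torch.log(self.z_mu.abs() + 1e-8)

    def kl(self):
        # Approximate KL(q(z) || log-uniform prior), Molchanov et al. (2017).
        la = self.log_alpha()
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()

    def forward(self, x, state):
        h, c = state
        pre = self.x2h(x) + self.h2h(h)  # gate preactivations, shape (batch, 4*hidden)
        if self.training:
            # Sample a noisy multiplier for each preactivation unit.
            sigma = torch.exp(0.5 * self.z_logsigma2)
            z = self.z_mu + sigma * torch.randn_like(pre)
        else:
            # Prune units whose signal-to-noise ratio is too low: their contribution
            # vanishes and the corresponding gate becomes the constant sigmoid(bias).
            z = self.z_mu * (self.log_alpha() < self.threshold).float()
        pre = pre * z + self.bias
        i, f, g, o = pre.chunk(4, dim=-1)
        c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(c_new)
        return h_new, (h_new, c_new)
```

In such a setup, cell.kl() would be added (suitably scaled) to the task loss during training; after training, units with high log alpha are dropped, which is what makes some gates constant and the forward pass cheaper.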

