Bayesian Compression for Natural Language Processing

10/25/2018
by Nadezhda Chirkova, et al.

In natural language processing, many tasks are successfully solved with recurrent neural networks, but such models have a huge number of parameters. The majority of these parameters are often concentrated in the embedding layer, whose size grows proportionally to the vocabulary length. We propose a Bayesian sparsification technique for RNNs that compresses the network dozens or hundreds of times without time-consuming hyperparameter tuning. We also generalize the model to sparsify the vocabulary, filtering out unnecessary words and compressing the RNN even further. We show that the choice of retained words is interpretable.
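
As a rough illustration of the kind of approach the abstract describes, the sketch below applies sparse variational dropout (Molchanov et al., 2017) to an embedding matrix and adds a per-word multiplicative gate so that whole vocabulary rows can be pruned. It is a minimal sketch under these assumptions, not the authors' implementation; the class name, initialization, and the log-alpha pruning threshold of 3 are illustrative choices.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseEmbedding(nn.Module):
    """Embedding layer with per-weight sparse variational dropout and a
    per-word multiplicative gate, so that individual weights and whole
    vocabulary rows can be pruned after training (illustrative sketch)."""

    def __init__(self, vocab_size, emb_dim):
        super().__init__()
        # Variational parameters: means and log-variances of weights and word gates.
        self.weight_mu = nn.Parameter(0.1 * torch.randn(vocab_size, emb_dim))
        self.weight_log_sigma2 = nn.Parameter(torch.full((vocab_size, emb_dim), -10.0))
        self.gate_mu = nn.Parameter(torch.ones(vocab_size, 1))
        self.gate_log_sigma2 = nn.Parameter(torch.full((vocab_size, 1), -10.0))

    @staticmethod
    def _sample(mu, log_sigma2):
        # Reparameterized sample from N(mu, sigma^2).
        return mu + torch.exp(0.5 * log_sigma2) * torch.randn_like(mu)

    @staticmethod
    def _log_alpha(mu, log_sigma2):
        # "Dropout rate" alpha = sigma^2 / mu^2; a large alpha means the value is mostly noise.
        return log_sigma2 - torch.log(mu ** 2 + 1e-8)

    def forward(self, token_ids):
        if self.training:
            w = self._sample(self.weight_mu, self.weight_log_sigma2)
            z = self._sample(self.gate_mu, self.gate_log_sigma2)
        else:
            # At test time keep only entries with low log-alpha (threshold 3 is a
            # common heuristic); zeroed gate rows drop whole words from the vocabulary.
            w = torch.where(self._log_alpha(self.weight_mu, self.weight_log_sigma2) < 3.0,
                            self.weight_mu, torch.zeros_like(self.weight_mu))
            z = torch.where(self._log_alpha(self.gate_mu, self.gate_log_sigma2) < 3.0,
                            self.gate_mu, torch.zeros_like(self.gate_mu))
        return F.embedding(token_ids, w * z)

    def kl(self):
        # Approximate KL(q || log-uniform prior) from Molchanov et al. (2017);
        # adding it to the task loss pushes log-alpha up and weights toward pruning.
        k1, k2, k3 = 0.63576, 1.8732, 1.48695

        def term(mu, log_sigma2):
            la = self._log_alpha(mu, log_sigma2)
            neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
            return -neg_kl.sum()

        return term(self.weight_mu, self.weight_log_sigma2) + \
               term(self.gate_mu, self.gate_log_sigma2)

In a full model the recurrent weight matrices would receive the same treatment, and the training objective would be the task loss plus kl() (possibly with an annealed weight). After training, the gate rows that survive the threshold indicate which words are kept in the vocabulary.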

Related research

08/01/2019 · Visualizing RNN States with Predictive Semantic Encodings
Recurrent Neural Networks are an effective and prevalent tool used to mo...

08/21/2019 · Restricted Recurrent Neural Networks
Recurrent Neural Network (RNN) and its variations such as Long Short-Ter...

10/31/2016 · LightRNN: Memory and Computation-Efficient Recurrent Neural Networks
Recurrent neural networks (RNNs) have achieved state-of-the-art performa...

06/21/2018 · Learning K-way D-dimensional Discrete Codes for Compact Embedding Representations
Conventional embedding methods directly associate each symbol with a con...

04/24/2023 · Semantic Tokenizer for Enhanced Natural Language Processing
Traditionally, NLP performance improvement has been focused on improving...

01/22/2019 · Deep learning and sub-word-unit approach in written art generation
Automatic poetry generation is novel and interesting application of natu...