Tensorized Embedding Layers for Efficient Model Compression

01/30/2019
by   Valentin Khrulkov, et al.

Embedding layers, which transform input words into real-valued vectors, are key components of deep neural networks used in natural language processing. However, when the vocabulary is large (e.g., 800k unique words in the One-Billion-Word dataset), the corresponding weight matrices can be enormous, which precludes their deployment in a resource-limited setting. We introduce a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop, or even a slight gain, in performance. Importantly, our method does not take a pre-trained model and compress its weights; rather, it supplants the standard embedding layers with their TT-based counterparts. The resulting model is then trained end-to-end; moreover, it can capitalize on larger batches due to the reduced memory requirements. We evaluate our method on a wide range of benchmarks in sentiment analysis, neural machine translation, and language modeling, and analyze the trade-off between performance and compression ratio for a wide range of architectures, from MLPs to LSTMs and Transformers.
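To make the idea concrete, here is a minimal NumPy sketch of a TT-parametrized embedding lookup. All sizes, ranks, and names below are illustrative assumptions (an 8,000-word vocabulary factored as 20x20x20 and a 64-dimensional embedding factored as 4x4x4), not the paper's actual configuration: instead of storing the full V x D matrix, we store small 4-way TT-cores and reconstruct a single embedding row on demand.

```python
import numpy as np

# Hypothetical factorizations (assumptions for this sketch):
# vocabulary 8000 = 20 * 20 * 20, embedding dimension 64 = 4 * 4 * 4
voc_shape = (20, 20, 20)  # factorization of the vocabulary size
emb_shape = (4, 4, 4)     # factorization of the embedding dimension
ranks = (1, 8, 8, 1)      # TT-ranks; the boundary ranks are always 1

rng = np.random.default_rng(0)
# One 4-way core per factor, with shape (r_{k-1}, v_k, d_k, r_k)
cores = [
    0.1 * rng.standard_normal((ranks[k], voc_shape[k], emb_shape[k], ranks[k + 1]))
    for k in range(3)
]

def tt_embedding(word_id):
    """Reconstruct one row of the (implicit) 8000 x 64 embedding matrix."""
    # Split the flat word index into a multi-index (i1, i2, i3)
    idx = np.unravel_index(word_id, voc_shape)
    res = cores[0][:, idx[0], :, :]              # shape (1, d1, r1)
    for k in range(1, 3):
        slice_k = cores[k][:, idx[k], :, :]      # shape (r_{k-1}, d_k, r_k)
        # Contract the running result with the next core slice
        res = np.einsum('adr,rbs->adbs', res, slice_k)
        res = res.reshape(1, -1, slice_k.shape[-1])
    return res.reshape(-1)                       # embedding vector of length 64

vec = tt_embedding(1234)
full_params = 8000 * 64                # dense embedding matrix: 512,000 weights
tt_params = sum(c.size for c in cores) # TT-cores: 6,400 weights (80x smaller)
print(vec.shape, full_params, tt_params)
```

Because only the small cores are stored (and trained end-to-end, as the abstract describes), the memory footprint shrinks by roughly the ratio `full_params / tt_params`; the per-lookup cost is a short chain of small matrix contractions.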


