Compressing Language Models using Doped Kronecker Products

01/24/2020
by Urmish Thakker, et al.

Kronecker Products (KP) have been used to compress IoT RNN applications by 15-38x compression factors, achieving better results than traditional compression methods. However, when KP is applied to large Natural Language Processing tasks, it leads to significant accuracy loss (approx. 26%). This paper proposes a way to recover the accuracy otherwise lost when applying KP to large NLP tasks, by allowing additional degrees of freedom in the KP matrix. More formally, we propose doping, a process of adding an extremely sparse overlay matrix on top of the pre-defined KP structure. We call this compression method doped Kronecker product compression. To train these models, we present a new solution to the phenomenon of co-matrix adaptation (CMA), which uses a new regularization scheme called co-matrix dropout regularization (CMR). We present experimental results that demonstrate compression of a large language model with LSTM layers of size 25 MB by 25x with a 1.4% loss in perplexity score. At 25x compression, an equivalent pruned network leads to a 7.9% loss in perplexity score, while HMD and LMF lead to even larger losses in perplexity score (15% or more).
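To make the doped KP structure concrete, here is a minimal sketch (in PyTorch, not the authors' code) of how a weight matrix can be expressed as a Kronecker product of two small co-matrices plus an extremely sparse overlay. The matrix sizes and the 1% sparsity level are illustrative assumptions, and the CMR training procedure described in the paper is not shown.

```python
import torch

def doped_kp_matrix(A, B, S):
    """Form a weight matrix as kron(A, B) plus a sparse overlay ("doping") matrix S.

    The result has shape (A.shape[0] * B.shape[0], A.shape[1] * B.shape[1]);
    S must already have that shape.
    """
    return torch.kron(A, B) + S

# Toy sizes: a 256x256 weight built from two 16x16 co-matrices (assumed sizes).
A = torch.randn(16, 16, requires_grad=True)
B = torch.randn(16, 16, requires_grad=True)

# Extremely sparse overlay: keep roughly 1% of entries non-zero (illustrative).
mask = (torch.rand(256, 256) < 0.01).float()
S = (torch.randn(256, 256) * mask).requires_grad_()

W = doped_kp_matrix(A, B, S)
print(W.shape)  # torch.Size([256, 256])
```

In this sketch the parameter count is 2 * 16 * 16 dense values plus the non-zero entries of S, versus 256 * 256 for the dense matrix, which is where the compression comes from; during training the paper applies CMR so that the KP factors and the sparse overlay do not co-adapt.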


