ROSITA: Refined BERT cOmpreSsion with InTegrAted techniques

03/21/2021
by Yuanxin Liu, et al.

Pre-trained language models of the BERT family have defined the state of the art in a wide range of NLP tasks. However, the performance of BERT-based models is mainly driven by their enormous number of parameters, which hinders their application in resource-limited scenarios. Faced with this problem, recent studies have attempted to compress BERT into a small-scale model. However, most previous work focuses on a single kind of compression technique, and little attention has been paid to the combination of different methods. When BERT is compressed with integrated techniques, a critical question is how to design the entire compression framework to obtain optimal performance. In response to this question, we integrate three kinds of compression methods (weight pruning, low-rank factorization, and knowledge distillation (KD)) and explore a range of designs concerning model architecture, KD strategy, pruning frequency, and learning rate schedule. We find that a careful choice of these designs is crucial to the performance of the compressed model. Based on the empirical findings, our best compressed model, dubbed Refined BERT cOmpreSsion with InTegrAted techniques (ROSITA), is 7.5× smaller than BERT while maintaining 98.5% of its performance on five tasks of the GLUE benchmark, outperforming previous BERT compression methods with a similar parameter budget. The code is available at https://github.com/llyx97/Rosita.
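
The abstract names three compression techniques that ROSITA integrates: weight pruning, low-rank factorization, and knowledge distillation. Below is a minimal, self-contained PyTorch sketch of what each ingredient looks like in isolation. It is an illustration only, not the authors' implementation (see the linked repository for ROSITA itself); all function names, shapes, and hyperparameters here are assumptions.

```python
# Illustrative sketch of the three compression ingredients combined in ROSITA.
# NOT the authors' code; function names and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F


def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude entries so roughly `sparsity` of them become 0."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    return weight * mask


def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Approximate a (d_out x d_in) matrix by two thin factors via truncated SVD."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (d_out, rank)
    B = Vh[:rank, :]             # (rank, d_in)
    return A, B                  # weight is approximated by A @ B


def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """Soft-label KD loss: KL divergence between temperature-softened distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)


if __name__ == "__main__":
    W = torch.randn(768, 768)                      # a stand-in for one BERT weight matrix
    W_pruned = magnitude_prune(W, sparsity=0.5)    # prune half the weights
    A, B = low_rank_factorize(W_pruned, rank=128)  # factorize into two thin matrices
    print("relative approximation error:",
          ((W_pruned - A @ B).norm() / W_pruned.norm()).item())
```

In a full compression pipeline these pieces interact: the factorized and pruned student is trained with the KD loss against the original BERT teacher, and the paper's contribution is an empirical study of how to schedule and combine these steps (architecture, KD strategy, pruning frequency, learning rate schedule) rather than the individual operations themselves.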


Related research

01/13/2020  AdaBERT: Task-Adaptive BERT Compression with Differentiable Neural Architecture Search
            Large pre-trained language models such as BERT have shown their effectiv...

10/16/2021  What do Compressed Large Language Models Forget? Robustness Challenges in Model Compression
            Recent works have focused on compressing pre-trained language models (PL...

09/07/2021  Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression
            Recent studies on compression of pretrained language models (e.g., BERT)...

01/26/2022  Auto-Compressing Subset Pruning for Semantic Image Segmentation
            State-of-the-art semantic segmentation models are characterized by high ...

03/30/2023  oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes
            In this paper, we introduce the range of oBERTa language models, an easy...

09/15/2021  EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation
            Pre-trained language models have shown remarkable results on various NLP...

05/31/2021  Greedy Layer Pruning: Decreasing Inference Time of Transformer Models
            Fine-tuning transformer models after unsupervised pre-training reaches a...
