Structured Pruning Learns Compact and Accurate Models

04/01/2022
by Mengzhou Xia, et al.

The growing size of neural language models has drawn increased attention to model compression. The two predominant approaches are pruning, which gradually removes weights from a pre-trained model, and distillation, which trains a smaller compact model to match a larger one. Pruning methods can significantly reduce the model size but rarely achieve speedups as large as distillation. Distillation methods, on the other hand, require large amounts of unlabeled data and are expensive to train. In this work, we propose a task-specific structured pruning method, CoFi (Coarse- and Fine-grained Pruning), which delivers highly parallelizable subnetworks and matches distillation methods in both accuracy and latency, without resorting to any unlabeled data. Our key insight is to jointly prune coarse-grained (e.g., layers) and fine-grained (e.g., heads and hidden units) modules, controlling the pruning decision for each parameter with masks of different granularity. We also devise a layerwise distillation strategy to transfer knowledge from the unpruned model to the pruned model during optimization. Our experiments on GLUE and SQuAD datasets show that CoFi yields models with over 10x speedups and only a small accuracy drop, demonstrating its effectiveness and efficiency relative to previous pruning and distillation approaches.
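To make the masking idea concrete, below is a minimal PyTorch sketch, not the authors' released implementation, of a transformer layer whose sublayers are gated by coarse-grained (sublayer-level) and fine-grained (per-head and per-hidden-unit) masks. The class and mask names (MaskedTransformerLayer, z_mha, z_ffn, z_head, z_int) are assumptions made for illustration only.

```python
# Illustrative sketch of coarse- and fine-grained masking (names are assumed,
# not taken from the paper's code). Masks are fixed here; in the actual method
# they would be learned during training.
import math
import torch
import torch.nn as nn


class MaskedTransformerLayer(nn.Module):
    """A transformer layer gated by coarse-grained (sublayer) and
    fine-grained (head / hidden-unit) masks."""

    def __init__(self, hidden=768, heads=12, ffn=3072):
        super().__init__()
        self.h, self.nh, self.dh = hidden, heads, hidden // heads
        self.qkv = nn.Linear(hidden, 3 * hidden)
        self.attn_out = nn.Linear(hidden, hidden)
        self.ffn_in = nn.Linear(hidden, ffn)
        self.ffn_out = nn.Linear(ffn, hidden)
        # Coarse-grained masks: drop an entire MHA or FFN sublayer at once.
        self.z_mha = nn.Parameter(torch.ones(1))
        self.z_ffn = nn.Parameter(torch.ones(1))
        # Fine-grained masks: individual attention heads and FFN hidden units.
        self.z_head = nn.Parameter(torch.ones(heads))
        self.z_int = nn.Parameter(torch.ones(ffn))

    def forward(self, x):                       # x: (batch, seq, hidden)
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = [y.view(b, t, self.nh, self.dh).transpose(1, 2) for y in (q, k, v)]
        att = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.dh), dim=-1)
        heads = att @ v                         # (batch, heads, seq, head_dim)
        # Per-head mask, applied before heads are mixed by the output projection.
        heads = heads * self.z_head.view(1, -1, 1, 1)
        attn = self.attn_out(heads.transpose(1, 2).reshape(b, t, self.h))
        x = x + attn * self.z_mha               # sublayer-level mask gates all of MHA

        hidden = torch.relu(self.ffn_in(x)) * self.z_int   # per-hidden-unit mask
        x = x + self.ffn_out(hidden) * self.z_ffn          # sublayer-level mask gates the FFN
        return x


if __name__ == "__main__":
    layer = MaskedTransformerLayer()
    x = torch.randn(2, 16, 768)
    print(layer(x).shape)   # torch.Size([2, 16, 768])
```

In this sketch, zeroing both sublayer masks of a layer removes that layer entirely (coarse-grained pruning), while the head and hidden-unit masks shrink the layers that remain (fine-grained pruning). The layerwise distillation mentioned in the abstract would add a loss aligning the pruned model's intermediate representations with the unpruned model's during optimization; it is omitted here.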

Related research

- 03/07/2023: Gradient-Free Structured Pruning with Unlabeled Data. Large Language Models (LLMs) have achieved great success in solving diff...
- 12/15/2022: Gradient-based Intra-attention Pruning on Pre-trained Language Models. Pre-trained language models achieve superior performance, but they are c...
- 05/25/2022: Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models. Model compression by way of parameter pruning, quantization, or distilla...
- 02/14/2021: Error-driven Pruning of Language Models for Virtual Assistants. Language models (LMs) for virtual assistants (VAs) are typically trained...
- 10/21/2021: Class-Discriminative CNN Compression. Compressing convolutional neural networks (CNNs) by pruning and distilla...
- 09/10/2021: Block Pruning For Faster Transformers. Pre-training has improved model accuracy for both classification and gen...
- 10/15/2021: Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm. Various pruning approaches have been proposed to reduce the footprint re...
