From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression

12/14/2021
by Runxin Xu et al.

Pre-trained Language Models (PLMs) have achieved great success in various Natural Language Processing (NLP) tasks under the pre-training and fine-tuning paradigm. With their huge numbers of parameters, however, PLMs are computation-intensive and resource-hungry, so model pruning has been introduced to compress large-scale PLMs. Most prior approaches consider only the task-specific knowledge needed for downstream tasks and ignore the essential task-agnostic knowledge during pruning, which can cause catastrophic forgetting and poor generalization. To maintain both task-agnostic and task-specific knowledge in the pruned model, we propose ContrAstive Pruning (CAP) under the pre-training and fine-tuning paradigm. CAP is designed as a general framework, compatible with both structured and unstructured pruning. Unified in contrastive learning, CAP enables the pruned model to learn from the pre-trained model for task-agnostic knowledge and from the fine-tuned model for task-specific knowledge. In addition, to better retain the performance of the pruned model, the snapshots (i.e., the intermediate models at each pruning iteration) also serve as effective supervision during pruning. Our extensive experiments show that adopting CAP consistently yields significant improvements, especially in extremely high sparsity scenarios. With only 3% of model parameters retained (i.e., 97% sparsity), CAP achieves 99.2% and 96.3% of the original BERT performance on the QQP and MNLI tasks, respectively, and our probing experiments demonstrate that the model pruned by CAP tends to achieve better generalization ability.
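The abstract describes CAP's objective only at a high level. As a rough illustration, a minimal InfoNCE-style sketch of the underlying idea, in which the pruned model's representations are pulled toward those of frozen teachers (the pre-trained model, the fine-tuned model, and the pruning snapshots) and pushed away from other in-batch examples, might look like the following. The function name, loss form, and temperature are assumptions for illustration, not the paper's exact formulation:

```python
# Hypothetical sketch of an InfoNCE-style contrastive objective in the spirit
# of CAP (not the authors' released code). Each pruned-model representation is
# pulled toward the matching representation from a frozen "teacher" (the
# pre-trained model, the fine-tuned model, or a pruning snapshot) and pushed
# away from the other examples in the batch.
import torch
import torch.nn.functional as F

def contrastive_pruning_loss(pruned_reps, teacher_reps_list, temperature=0.1):
    """pruned_reps: [batch, dim] representations from the pruned model.
    teacher_reps_list: list of [batch, dim] tensors, one per teacher."""
    z = F.normalize(pruned_reps, dim=-1)
    total = 0.0
    for teacher_reps in teacher_reps_list:
        t = F.normalize(teacher_reps, dim=-1).detach()     # teachers stay frozen
        logits = z @ t.T / temperature                     # pairwise similarities
        labels = torch.arange(z.size(0), device=z.device)  # positives: diagonal
        total = total + F.cross_entropy(logits, labels)
    return total / len(teacher_reps_list)

# Toy usage: three teachers (pre-trained model, fine-tuned model, one snapshot).
batch, dim = 8, 768
pruned = torch.randn(batch, dim, requires_grad=True)
teachers = [torch.randn(batch, dim) for _ in range(3)]
loss = contrastive_pruning_loss(pruned, teachers)
loss.backward()
```

In the full framework, such teacher-wise contrastive terms would be combined with the ordinary task loss at each step of iterative pruning.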

