DeepCuts: Single-Shot Interpretability based Pruning for BERT

12/27/2022
by Jasdeep Singh Grover, et al.

As language models have grown in parameters and layers, it has become much harder to train and run inference with them on single GPUs. This severely restricts the availability of large language models such as GPT-3, BERT-Large, and many others. A common technique for addressing this problem is pruning the network architecture by removing transformer heads, fully-connected weights, and other modules. The main challenge is to discern the important parameters from the less important ones. Our goal is to find strong metrics for identifying such parameters. We therefore propose two strategies for computing importance scores: Cam-Cut, based on GradCAM interpretations, and Smooth-Cut, based on SmoothGrad. Through this work, we show that our scoring functions assign more relevant, task-based scores to the network parameters, and thus both of our pruning approaches significantly outperform the standard weight- and gradient-based strategies, especially at higher compression ratios in BERT-based models. We also analyze our pruning masks and find them to be significantly different from the ones obtained using standard metrics.
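The abstract does not spell out the scoring formula, but a minimal sketch of how a SmoothGrad-style importance score could drive single-shot pruning might look like the following. It assumes a saliency score of the form |weight × gradient| averaged over several noisy copies of the input; the function names `smoothgrad_scores` and `prune_by_score` and all hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

def smoothgrad_scores(model, inputs, targets, loss_fn, n_samples=20, noise_std=0.1):
    # Accumulate |weight * gradient| over several noisy copies of the input,
    # averaging out local gradient fluctuations (the SmoothGrad idea).
    scores = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    for _ in range(n_samples):
        noisy = inputs + noise_std * torch.randn_like(inputs)  # perturb the inputs
        model.zero_grad()
        loss = loss_fn(model(noisy), targets)
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                scores[name] += (p.detach() * p.grad).abs() / n_samples
    return scores

def prune_by_score(model, scores, sparsity=0.5):
    # Single-shot pruning: zero out the lowest-scoring fraction of each weight matrix.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() < 2:          # skip biases and LayerNorm parameters
                continue
            k = max(1, int(sparsity * p.numel()))
            threshold = scores[name].flatten().kthvalue(k).values
            p.mul_((scores[name] > threshold).float())

# Toy usage; a BERT encoder would replace this two-layer stand-in.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
scores = smoothgrad_scores(model, x, y, nn.CrossEntropyLoss())
prune_by_score(model, scores, sparsity=0.6)
```

Averaging the saliency over noisy inputs is what distinguishes a SmoothGrad-style score from a plain weight-times-gradient score; the single-shot aspect is that the mask is computed once, before any retraining.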

