Error-driven Pruning of Language Models for Virtual Assistants

02/14/2021
by Sashank Gondala, et al.

Language models (LMs) for virtual assistants (VAs) are typically trained on large amounts of data, resulting in prohibitively large models that require excessive memory and/or cannot serve user requests in real time. Entropy pruning yields smaller models, but with significant degradation of effectiveness in the tail of the user request distribution. We customize entropy pruning by allowing for a keep list of infrequent n-grams that require a more relaxed pruning threshold, and propose three methods to construct the keep list. Each method has its own advantages and disadvantages with respect to LM size, ASR accuracy, and the cost of constructing the keep list. Our best LM achieves the largest WER gains, but is 8 times larger than the baseline. We also propose discriminative methods to reduce the size of the LM while retaining the majority of the WER gains achieved by the largest LM.
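The keep-list idea above can be sketched in a few lines. In standard entropy (Stolcke-style) pruning, each n-gram is scored by the relative-entropy increase its removal would cause, and low-scoring n-grams are dropped; a keep list simply applies a more relaxed (lower) threshold to selected infrequent n-grams so they survive pruning. The scores, thresholds, and n-grams below are illustrative assumptions, not values from the paper:

```python
def prune_ngrams(ngram_scores, keep_list, theta, theta_keep):
    """Keep-list-aware entropy pruning sketch.

    ngram_scores: dict mapping n-gram -> pruning score (a stand-in for the
                  relative-entropy increase caused by removing that n-gram).
    keep_list:    set of infrequent n-grams given the relaxed threshold.
    theta:        default pruning threshold.
    theta_keep:   relaxed threshold (theta_keep < theta), so keep-list
                  n-grams are pruned less aggressively.
    """
    kept = {}
    for ngram, score in ngram_scores.items():
        threshold = theta_keep if ngram in keep_list else theta
        if score >= threshold:  # prune n-grams whose removal costs little
            kept[ngram] = score
    return kept


# Hypothetical example: a rare tail request survives only via the keep list.
scores = {
    "play music": 0.9,       # frequent head n-gram, high cost to remove
    "call mom": 0.5,
    "rare song title": 0.1,  # tail n-gram, would normally be pruned
    "common filler": 0.1,
}
kept = prune_ngrams(scores, keep_list={"rare song title"},
                    theta=0.3, theta_keep=0.05)
# "rare song title" is retained; "common filler" (same score) is pruned.
```

The three keep-list construction methods proposed in the paper differ only in how the `keep_list` set is chosen; the pruning loop itself is unchanged.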

