DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization

03/21/2022
by Zheng Li, et al.

Large-scale pre-trained sequence-to-sequence models like BART and T5 achieve state-of-the-art performance on many generative NLP tasks. However, such models pose a great challenge in resource-constrained scenarios owing to their large memory requirements and high latency. To alleviate this issue, we propose to jointly distill and quantize the model, where knowledge is transferred from the full-precision teacher model to the quantized and distilled low-precision student model. Empirical analyses show that, despite the challenging nature of generative tasks, we were able to achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets. We further pushed the limit of compression ratio to 27.7x and presented the performance-efficiency trade-off for generative tasks using pre-trained models. To the best of our knowledge, this is the first work aiming to effectively distill and quantize sequence-to-sequence pre-trained models for language generation tasks.
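The abstract describes transferring knowledge from a full-precision teacher to a quantized, distilled student. The sketch below is a minimal, hedged illustration of that general idea in PyTorch, not the authors' released implementation: the names fake_quantize, QuantizedLinear, and distillation_loss, the bit width, and the loss weights are assumptions chosen for the example.

```python
# Minimal sketch of joint distillation and quantization (illustrative only,
# not the DQ-BART authors' exact method). Weight quantization is simulated
# with symmetric uniform "fake" quantization and a straight-through estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fake_quantize(weight: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Symmetric uniform quantization with a straight-through estimator."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = weight.abs().max().clamp(min=1e-8) / qmax
    q = torch.round(weight / scale).clamp(-qmax - 1, qmax) * scale
    # Forward pass uses the quantized weights; backward pass treats the
    # quantizer as the identity so gradients still flow to `weight`.
    return weight + (q - weight).detach()


class QuantizedLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized at every forward pass."""

    def __init__(self, in_features: int, out_features: int, num_bits: int = 8):
        super().__init__(in_features, out_features)
        self.num_bits = num_bits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, fake_quantize(self.weight, self.num_bits), self.bias)


def distillation_loss(student_logits, teacher_logits, labels,
                      alpha: float = 0.5, T: float = 2.0) -> torch.Tensor:
    """Hard-label cross-entropy plus soft-label KL against the teacher."""
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)),
                         labels.view(-1), ignore_index=-100)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kl


# Toy usage: distill a full-precision teacher layer into a low-bit student.
teacher = nn.Linear(16, 32)
student = QuantizedLinear(16, 32, num_bits=2)
x = torch.randn(4, 16)
labels = torch.randint(0, 32, (4,))
loss = distillation_loss(student(x), teacher(x).detach(), labels)
loss.backward()
```

In the paper's setting the teacher and student are full encoder-decoder models (the student with fewer layers), but the loss structure, combining ground-truth supervision with the teacher's soft targets while quantization is simulated during training, follows the same pattern as this toy layer-level example.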


Related research

09/20/2021  BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese
We present BARTpho with two versions – BARTpho_word and BARTpho_syllable...

04/04/2020  STEP: Sequence-to-Sequence Transformer Pre-training for Document Summarization
Abstractive summarization aims to rewrite a long document to its shorter...

05/22/2020  Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Large pre-trained language models have been shown to store factual knowl...

12/31/2020  FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation
Natural language (NL) explanations of model predictions are gaining popu...

09/09/2022  Enhancing Pre-trained Models with Text Structure Knowledge for Question Generation
Today the pre-trained language models achieve great success for question...

08/07/2021  Tiny Neural Models for Seq2Seq
Semantic parsing models with applications in task oriented dialog system...

07/10/2023  KU-DMIS-MSRA at RadSum23: Pre-trained Vision-Language Model for Radiology Report Summarization
In this paper, we introduce CheXOFA, a new pre-trained vision-language m...
