SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization

03/13/2022
by   Mathieu Ravaut, et al.

Sequence-to-sequence neural networks have recently achieved great success in abstractive summarization, especially through fine-tuning large pre-trained language models on the downstream dataset. These models are typically decoded with beam search to generate a single summary. However, the search space is very large, and because of exposure bias, such decoding is not optimal. In this paper, we show that it is possible to directly train a second-stage model that re-ranks a set of summary candidates. Our mixture-of-experts SummaReranker learns to select a better candidate and consistently improves the performance of the base model. With a base PEGASUS, we push ROUGE scores by 5.44 on Reddit TIFU (29.83 ROUGE-1), reaching a new state-of-the-art. Our code and checkpoints will be available at https://github.com/ntunlp/SummaReranker.
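The abstract describes a two-stage generate-then-rerank pipeline: a fine-tuned sequence-to-sequence model first decodes several summary candidates, and a second-stage scorer then picks one of them. The sketch below illustrates only that pipeline shape, not the paper's method: the checkpoint name, the diverse-beam-search settings, and the stand-in length-based scorer are assumptions; SummaReranker itself trains a multi-task mixture-of-experts model to score candidates.

```python
# Minimal sketch of a generate-then-rerank pipeline (assumptions noted below);
# this is NOT the authors' SummaReranker implementation.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Assumed base checkpoint; any fine-tuned summarization model would do.
MODEL_NAME = "google/pegasus-reddit_tifu"

tokenizer = PegasusTokenizer.from_pretrained(MODEL_NAME)
model = PegasusForConditionalGeneration.from_pretrained(MODEL_NAME)


def generate_candidates(document: str, num_candidates: int = 8) -> list[str]:
    """Stage 1: decode several summary candidates with diverse beam search."""
    inputs = tokenizer(document, truncation=True, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=num_candidates,
        num_beam_groups=num_candidates,
        diversity_penalty=1.0,
        num_return_sequences=num_candidates,
    )
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)


def rerank(document: str, candidates: list[str]) -> str:
    """Stage 2: score each candidate and return the highest-scoring one.

    SummaReranker trains a multi-task mixture-of-experts scorer for this step;
    a trivial word-count heuristic stands in here as a placeholder score.
    """
    scores = [len(c.split()) for c in candidates]  # placeholder, not the paper's scorer
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best]


if __name__ == "__main__":
    document = "So I tried to fix my own sink this weekend and ended up flooding the kitchen..."
    candidates = generate_candidates(document)
    print(rerank(document, candidates))
```

Swapping the placeholder scorer for a learned re-ranker is the part the paper contributes; the surrounding generate-then-select loop stays the same.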


Related research

Towards Summary Candidates Fusion (10/17/2022)
Sequence-to-sequence deep neural models fine-tuned for abstractive summa...

BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese (09/20/2021)
We present BARTpho with two versions – BARTpho_word and BARTpho_syllable...

Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models (09/20/2022)
Prompting, which casts downstream applications as language modeling task...

Balancing Lexical and Semantic Quality in Abstractive Summarization (05/17/2023)
An important problem of the sequence-to-sequence neural models widely us...

Generating EDU Extracts for Plan-Guided Summary Re-Ranking (05/28/2023)
Two-step approaches, in which summary candidates are generated-then-rera...

An Efficient Coarse-to-Fine Facet-Aware Unsupervised Summarization Framework based on Semantic Blocks (08/17/2022)
Unsupervised summarization methods have achieved remarkable results by i...

Pushing Mixture of Experts to the Limit: Extremely Parameter Efficient MoE for Instruction Tuning (09/11/2023)
The Mixture of Experts (MoE) is a widely known neural architecture where...
