Improving Multi-Document Summarization through Referenced Flexible Extraction with Credit-Awareness

05/04/2022
by   Yun-Zhu Song, et al.
A notable challenge in Multi-Document Summarization (MDS) is the extreme length of the input. In this paper, we present an extract-then-abstract Transformer framework to overcome this problem. Specifically, we leverage pre-trained language models to construct a hierarchical extractor for salient sentence selection across documents and an abstractor for rewriting the selected contents as summaries. However, learning such a framework is challenging, since the optimal contents for the abstractor are generally unknown. Previous works typically create a pseudo extraction oracle to enable supervised learning for both the extractor and the abstractor. Nevertheless, we argue that the performance of such methods could be restricted due to insufficient information for prediction and inconsistent objectives between training and testing. To this end, we propose a loss weighting mechanism that makes the model aware of the unequal importance of sentences not in the pseudo extraction oracle, and leverage the fine-tuned abstractor to generate summary references as auxiliary signals for learning the extractor. Moreover, we propose a reinforcement learning method that can be efficiently applied to the extractor to harmonize the optimization between training and testing. Experimental results show that our framework substantially outperforms strong baselines with comparable model sizes and achieves the best results on the Multi-News, Multi-XScience, and WikiCatSum corpora.
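The credit-aware loss weighting described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes that non-oracle sentences are down-weighted in a binary cross-entropy extraction loss according to how much they overlap with the reference summary (here approximated with simple unigram overlap as a stand-in for whatever scoring the authors actually use), so the extractor is penalized less for selecting sentences that are nearly as good as the oracle ones.

```python
import math

def credit_weights(sentences, reference, oracle_idx):
    """Assign a per-sentence loss weight (illustrative sketch).

    Oracle sentences get weight 1.0. Non-oracle sentences get a weight
    of (1 - overlap with the reference summary), so selecting a
    near-oracle sentence incurs a smaller penalty than selecting an
    irrelevant one. Unigram overlap is a hypothetical proxy metric.
    """
    ref_tokens = set(reference.lower().split())
    weights = []
    for i, sent in enumerate(sentences):
        if i in oracle_idx:
            weights.append(1.0)
        else:
            toks = set(sent.lower().split())
            overlap = len(toks & ref_tokens) / max(len(toks), 1)
            weights.append(1.0 - overlap)
    return weights

def weighted_bce(probs, labels, weights):
    """Credit-aware binary cross-entropy over sentence selection scores."""
    eps = 1e-8
    total = 0.0
    for p, y, w in zip(probs, labels, weights):
        total += -w * (y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return total / len(probs)
```

With this weighting, a non-oracle sentence that fully overlaps the reference contributes zero loss even when the extractor scores it highly, whereas an off-topic sentence is penalized at full weight, which is the "unequal importance" intuition from the abstract.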

