Reinforcing Semantic-Symmetry for Document Summarization

12/14/2021
by Mingyang Song et al.

Document summarization condenses a long document into a short version that preserves the salient information and accurate semantic descriptions. The central challenge is keeping the output summary semantically consistent with the input document. To this end, recent work has focused on supervised end-to-end hybrid approaches that contain an extractor module and an abstractor module: the extractor identifies the salient sentences in the input document, and the abstractor generates a summary from those sentences. Such models maintain consistency between the generated summary and the reference summary through various strategies (e.g., reinforcement learning). However, training the hybrid model involves two semantic gaps: one between the document and the extracted sentences, and the other between the extracted sentences and the summary. Existing methods do not consider these gaps explicitly, which often results in a semantic bias in the generated summary. To mitigate this issue, this paper proposes a new reinforcing semantic-symmetry learning model for document summarization (ReSyM). ReSyM introduces a semantic-consistency reward in the extractor to bridge the first gap and a semantic dual-reward in the abstractor to bridge the second gap. The whole summarization process is trained via reinforcement learning with a hybrid reward mechanism that combines the two rewards. Moreover, a comprehensive sentence representation learning method is presented to fully capture the information in the original document. Experiments on two widely used benchmark datasets, CNN/Daily Mail and BigPatent, show the superiority of ReSyM over state-of-the-art baselines across various evaluation metrics.
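To make the hybrid reward mechanism concrete, the following is a minimal sketch of how a semantic-consistency reward (extractor, first gap) and a semantic dual-reward (abstractor, second gap) could be combined into a single training signal for a REINFORCE-style update. The cosine-similarity reward, the function names, the mixing weight `alpha`, and the baseline are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def semantic_reward(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Assumed semantic reward: cosine similarity between two embeddings.
    return F.cosine_similarity(a, b, dim=-1)

def hybrid_reward(doc_emb, ext_emb, sum_emb, ref_emb, alpha=0.5):
    # First gap: document vs. extracted sentences (extractor reward).
    r_ext = semantic_reward(doc_emb, ext_emb)
    # Second gap: extracted sentences vs. generated summary, plus generated
    # vs. reference summary (assumed form of the abstractor dual-reward).
    r_abs = 0.5 * (semantic_reward(ext_emb, sum_emb) +
                   semantic_reward(sum_emb, ref_emb))
    # Combine the two rewards into the hybrid training signal.
    return alpha * r_ext + (1.0 - alpha) * r_abs

def reinforce_loss(seq_log_prob, reward, baseline):
    # REINFORCE-style policy-gradient loss: scale the sampled sequence's
    # log-likelihood by the advantage (reward minus a baseline).
    advantage = (reward - baseline).detach()
    return -(advantage * seq_log_prob).mean()
```

In practice the embeddings would come from the model's sentence representation learning module and the reinforcement objective would typically be mixed with a maximum-likelihood term, but those details are beyond the scope of the abstract.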

