Controlling the Amount of Verbatim Copying in Abstractive Summarization

11/23/2019
by Kaiqiang Song et al.

An abstract must not change the meaning of the original text. One of the most effective ways to achieve that is to increase the amount of copying while still allowing for text abstraction. Human editors can usually exercise control over copying, resulting in summaries that are more extractive than abstractive, or vice versa. However, it remains poorly understood whether modern neural abstractive summarizers can provide the same flexibility, i.e., learn from single reference summaries to generate multiple summary hypotheses with varying degrees of copying. In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones. We frame the task of summarization as language modeling and exploit alternative mechanisms to generate summary hypotheses. Our method allows for control over copying during both the training and decoding stages of a neural summarization model. Through extensive experiments, we illustrate the significance of our proposed method in controlling the amount of verbatim copying and achieve competitive results against strong baselines. Our analysis further reveals interesting and not-so-obvious findings.
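The abstract does not spell out the mechanism, but the decoding-time part of the idea, steering generation toward or away from tokens that appear verbatim in the source, can be sketched as a simple logit bias. The sketch below is an illustrative assumption, not the authors' actual method (the paper also controls copying at training time); `bias_copy_logits`, `copy_bias`, and the toy vocabulary are hypothetical names introduced for this example.

```python
import torch

def bias_copy_logits(logits: torch.Tensor,
                     source_token_ids: set[int],
                     copy_bias: float) -> torch.Tensor:
    """Shift next-token logits toward (copy_bias > 0) or away from
    (copy_bias < 0) tokens that occur verbatim in the source document.

    logits: shape (vocab_size,), the decoder's scores for the next token.
    source_token_ids: vocabulary ids of all tokens in the source text.
    copy_bias: strength of the copy preference; 0.0 leaves decoding unchanged.
    """
    if not source_token_ids:
        return logits
    biased = logits.clone()
    idx = torch.tensor(sorted(source_token_ids))
    biased[idx] += copy_bias   # reward (or penalize) source tokens
    return biased

# Toy usage: a positive copy_bias flips the argmax to a source token.
logits = torch.tensor([1.0, 2.0, 1.5, 0.5])    # scores over a 4-token vocab
source_ids = {0, 2}                            # tokens 0 and 2 occur in the source
print(torch.argmax(logits).item())                                      # -> 1 (novel token)
print(torch.argmax(bias_copy_logits(logits, source_ids, 1.0)).item())   # -> 2 (copied token)
```

In a full summarizer this bias would be applied at every decoding step, with `copy_bias` exposed as the user-facing knob; a pure decoding-time bias of this kind does not capture the training-stage control the paper also describes.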


research · 04/05/2021
A New Approach to Overgenerating and Scoring Abstractive Summaries
We propose a new approach to generate multiple variants of the target su...

research · 10/20/2020
Better Highlighting: Creating Sub-Sentence Summary Highlights
Amongst the best means to summarize is highlighting. In this paper, we a...

research · 03/19/2018
Controlling Decoding for More Abstractive Summaries with Copy-Based Networks
Attention-based neural abstractive summarization systems equipped with c...

research · 04/30/2021
The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey
Recently, various neural encoder-decoder models pioneered by Seq2Seq fra...

research · 09/22/2021
Enriching and Controlling Global Semantics for Text Summarization
Recently, Transformer-based models have been proven effective in the abs...

research · 04/19/2021
Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection
Despite significant progress in neural abstractive summarization, recent...

research · 11/13/2017
Faithful to the Original: Fact Aware Neural Abstractive Summarization
Unlike extractive summarization, abstractive summarization has to fuse d...
