Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection

04/19/2021
by Sihao Chen, et al.

Despite significant progress in neural abstractive summarization, recent studies have shown that current models are prone to generating summaries that are unfaithful to the original context. To address this issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique for correcting extrinsic hallucinations (i.e. information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries in which named entities and quantities in the generated summary are replaced with ones of compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that the proposed method is effective at identifying and correcting extrinsic hallucinations. We also analyze the typical hallucination phenomena produced by different types of neural summarization systems, in the hope of providing insights for future work in this direction.
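The abstract's pipeline can be illustrated with a minimal sketch. This is not the authors' implementation: entity extraction is replaced with hand-supplied (entity, type) lists, and the learned discriminative correction model is replaced with a toy token-overlap faithfulness heuristic. The function names `generate_candidates`, `faithfulness_score`, and `select_best` are illustrative, not from the paper.

```python
def generate_candidates(summary, summary_entities, source_entities):
    """Generate contrast candidates by swapping each summary entity with
    source entities of the same semantic type (toy stand-in for the
    paper's candidate generation step)."""
    candidates = [summary]
    for ent, etype in summary_entities:
        for src_ent, src_type in source_entities:
            if src_type == etype and src_ent != ent:
                candidates.append(summary.replace(ent, src_ent))
    return candidates

def faithfulness_score(candidate, source):
    """Toy stand-in for the learned discriminative model: the fraction of
    candidate tokens that also appear in the source document."""
    cand_tokens = candidate.lower().split()
    src_tokens = set(source.lower().split())
    return sum(t in src_tokens for t in cand_tokens) / len(cand_tokens)

def select_best(candidates, source):
    """Select the candidate the scorer judges most faithful to the source."""
    return max(candidates, key=lambda c: faithfulness_score(c, source))

# Hypothetical example: the summary hallucinates the entity "Apex Corp".
source = "Acme Corp reported revenue of 5 million in 2020"
summary = "Apex Corp reported revenue of 5 million"
cands = generate_candidates(
    summary,
    summary_entities=[("Apex Corp", "ORG")],
    source_entities=[("Acme Corp", "ORG"), ("2020", "DATE")],
)
print(select_best(cands, source))
```

In the paper's actual system, the replacement candidates come from named entities and quantities detected in the source document, and the selector is a trained discriminative model rather than a lexical-overlap heuristic.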


Related research

07/08/2015 - Multi-Document Summarization via Discriminative Summary Reranking
Existing multi-document summarization systems usually rely on a specific...

07/25/2019 - Summary Refinement through Denoising
We propose a simple method for post-processing the outputs of a text sum...

09/07/2018 - Exploiting local and global performance of candidate systems for aggregation of summarization techniques
With an ever growing number of extractive summarization techniques being...

02/03/2018 - Content based Weighted Consensus Summarization
Multi-document summarization has received a great deal of attention in t...

08/31/2021 - Faithful or Extractive? On Mitigating the Faithfulness-Abstractiveness Trade-off in Abstractive Summarization
Despite recent progress in abstractive summarization, systems still suff...

03/31/2022 - BRIO: Bringing Order to Abstractive Summarization
Abstractive summarization models are commonly trained using maximum like...

11/23/2019 - Controlling the Amount of Verbatim Copying in Abstractive Summarization
An abstract must not change the meaning of the original text. A single m...
