Learning with Rejection for Abstractive Text Summarization

02/16/2023
by Meng Cao, et al.

State-of-the-art abstractive summarization systems frequently hallucinate content that is not supported by the source document, largely because of noise in the training data. Existing methods drop the noisy samples or tokens from the training set entirely, which reduces the effective training set size and creates an artificial propensity to copy words from the source. In this work, we propose a training objective for abstractive summarization based on rejection learning, in which the model learns whether or not to reject potentially noisy tokens. We further propose a regularized decoding objective that penalizes non-factual candidate summaries at inference time using the rejection probability learned during training. In both automatic and human evaluations against five baseline models, our method considerably improves the factuality of generated summaries while also increasing their abstractiveness.
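The abstract does not spell out the loss, but a per-token reject option admits a simple formulation. Below is a minimal PyTorch sketch of one plausible instantiation, not the authors' implementation: it assumes the decoder exposes a per-token rejection head (rejection_logits) alongside its vocabulary logits, plus a fixed rejection_cost hyperparameter that keeps rejection from being free. All names are hypothetical.

```python
import torch
import torch.nn.functional as F


def rejection_nll(token_logits, rejection_logits, targets, rejection_cost=0.5):
    """Per-token NLL with a learned 'reject' option (illustrative sketch).

    token_logits:     (batch, seq, vocab) decoder logits over the vocabulary
    rejection_logits: (batch, seq)        logit that reference token t is noisy
    targets:          (batch, seq)        reference summary token ids
    rejection_cost:   flat price for rejecting a token; without it the model
                      would trivially reject every target

    Each reference token costs either the usual cross-entropy, weighted by
    the probability of accepting it, or the flat rejection cost, weighted by
    the probability of rejecting it.
    """
    nll = F.cross_entropy(
        token_logits.transpose(1, 2),  # (batch, vocab, seq), as cross_entropy expects
        targets,
        reduction="none",
    )  # -> (batch, seq)
    p_reject = torch.sigmoid(rejection_logits)
    per_token = (1.0 - p_reject) * nll + p_reject * rejection_cost
    return per_token.mean()


def rescore(candidate_logprob, mean_rejection_prob, penalty_weight=1.0):
    """Decoding-time score: down-weight candidates full of 'rejectable' tokens."""
    return candidate_logprob - penalty_weight * mean_rejection_prob
```

In this reading, rejection_cost controls the trade-off: a low cost lets the model discount most reference tokens as noise, while a high cost recovers ordinary maximum-likelihood training. The rescore helper mirrors the regularized decoding idea, penalizing beam candidates whose tokens the model would largely have rejected.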


Related research

10/24/2020 · Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation
Summaries generated by abstractive summarization are supposed to only co...

11/03/2022 · Latent Prompt Tuning for Text Summarization
Prompts with different control signals (e.g., length, keywords, etc.) ca...

10/24/2022 · Mutual Information Alleviates Hallucinations in Abstractive Summarization
Despite significant progress in the quality of language generated from a...

03/19/2020 · Boosting Factual Correctness of Abstractive Summarization
A commonly observed problem with abstractive summarization is the distor...

07/23/2023 · Evaluating Emotional Nuances in Dialogue Summarization
Automatic dialogue summarization is a well-established task that aims to...

05/25/2021 · Focus Attention: Promoting Faithfulness and Diversity in Summarization
Professional summaries are written with document-level information, such...

08/26/2021 · Alleviating Exposure Bias via Contrastive Learning for Abstractive Text Summarization
Encoder-decoder models have achieved remarkable success in abstractive t...
