Towards Improving Faithfulness in Abstractive Summarization

10/04/2022
by Xiuying Chen, et al.

Despite the success of neural abstractive summarization based on pre-trained language models, one unresolved issue is that the generated summaries are not always faithful to the input document. There are two possible causes of this unfaithfulness problem: (1) the summarization model fails to understand or capture the gist of the input text, and (2) the model over-relies on the language model to generate fluent but inadequate words. In this work, we propose a Faithfulness Enhanced Summarization model (FES), designed to address these two problems and improve faithfulness in abstractive summarization. For the first problem, we propose using question answering (QA) to examine whether the encoder fully grasps the input document and can answer questions about its key information. The QA attention over the relevant input words can also be used to stipulate how the decoder should attend to the source. For the second problem, we introduce a max-margin loss defined on the difference between the language model and the summarization model, aiming to prevent the language model from becoming overconfident. Extensive experiments on two benchmark summarization datasets, CNN/DM and XSum, demonstrate that our model significantly outperforms strong baselines. An evaluation of factual consistency also shows that our model generates more faithful summaries than the baselines.
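The abstract does not spell out the exact form of the max-margin term, so the following is only a minimal sketch of one plausible reading: a hinge penalty on per-token probabilities that fires whenever an unconditioned language model is more confident about the gold token than the document-conditioned summarizer. The names `p_sum`, `p_lm`, `margin`, and the weighting scheme are illustrative assumptions, not the authors' notation.

```python
import torch

def max_margin_loss(p_sum: torch.Tensor,
                    p_lm: torch.Tensor,
                    margin: float = 0.1) -> torch.Tensor:
    """Hinge-style penalty on the LM/summarizer probability gap.

    p_sum: probabilities the summarization model assigns to the
           gold target tokens, shape (batch, seq_len).
    p_lm:  probabilities a document-agnostic language model assigns
           to the same tokens, shape (batch, seq_len).

    The term is zero for tokens where the summarizer beats the
    language model by at least `margin`; otherwise the remaining
    gap is penalized, discouraging outputs that are driven by LM
    fluency rather than the source document.
    """
    return torch.clamp(p_lm - p_sum + margin, min=0.0).mean()
```

In training, such a term would typically be added to the standard negative log-likelihood, e.g. `loss = nll + lam * max_margin_loss(p_sum, p_lm)` with a small weight `lam` (also a hypothetical name), so the model is only pushed away from tokens where the language model is the more confident of the two.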
