CoP: Factual Inconsistency Detection by Controlling the Preference

12/03/2022
by Shuaijie She, et al.

Abstractive summarization is the task of generating a summary from an input document. Although significant progress has been made, factual inconsistency between the document and the generated summary still limits practical applications. Previous work found that the probabilities a generation model assigns reflect its preferences for the generated summary, including a preference for factual consistency but also preferences rooted in language or knowledge priors. To isolate the preference for factual consistency, we propose an unsupervised framework named CoP that controls the preference of the generation model with the help of prompts. Specifically, the framework performs an extra inference step in which a text prompt is introduced as an additional input; the generation probability of this extra inference pass describes a second preference. The difference between the two preferences, i.e. the difference between the two probabilities, serves as a measurement for detecting factual inconsistency. Interestingly, we found that with a properly designed prompt, the framework can evaluate specific preferences and thus measure fine-grained categories of inconsistency, such as entity-related and coreference-related inconsistency. The framework can also be extended to the supervised setting to learn better prompts from labeled data. Experiments show that our framework achieves new state-of-the-art results on three factual inconsistency detection tasks.
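The scoring idea in the abstract can be sketched in a few lines: run the model once on the document alone and once with a prompt appended, and treat the per-token change in log-probability as the inconsistency signal. The sketch below is illustrative only; the function names and toy probabilities are assumptions (a real system would obtain the two probability sequences from two decoder passes of the summarization model), and the flagging threshold is arbitrary.

```python
import math

def inconsistency_scores(probs_plain, probs_prompted):
    """Per-token log-probability difference between the prompted and
    plain inference passes. A token whose probability rises sharply once
    the prompt is added is driven more by the prompt-described preference
    than by the document, flagging it as potentially inconsistent."""
    return [math.log(q) - math.log(p)
            for p, q in zip(probs_plain, probs_prompted)]

# Toy stand-in probabilities for a 4-token summary (hypothetical values,
# not taken from the paper).
plain    = [0.60, 0.55, 0.05, 0.70]
prompted = [0.62, 0.56, 0.40, 0.71]

scores = inconsistency_scores(plain, prompted)
threshold = 1.0  # illustrative cutoff, not from the paper
flagged = [i for i, s in enumerate(scores) if s > threshold]
print(flagged)  # token 2's probability jumps 0.05 -> 0.40, so it is flagged
```

Here token 2 gains roughly log(0.40/0.05) ≈ 2.08 in log-probability under the prompt, well above the other tokens, so it alone crosses the cutoff.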


Related research

research · 02/18/2021 · Entity-level Factual Consistency of Abstractive Text Summarization
A key challenge for abstractive summarization is ensuring factual consis...

research · 12/20/2022 · On Improving Summarization Factual Consistency from Natural Language Feedback
Despite the recent progress in language generation models, their outputs...

research · 06/01/2023 · Preference-grounded Token-level Guidance for Language Model Fine-tuning
Aligning language models (LMs) with preferences is an important problem ...

research · 08/30/2021 · Factual Consistency Evaluation for Text Summarization via Counterfactual Estimation
Despite significant progress has been achieved in text summarization, fa...

research · 03/24/2023 · SPEC: Summary Preference Decomposition for Low-Resource Abstractive Summarization
Neural abstractive summarization has been widely studied and achieved gr...

research · 12/14/2021 · Reinforced Abstractive Summarization with Adaptive Length Controlling
Document summarization, as a fundamental task in natural language genera...
