TRUE: Re-evaluating Factual Consistency Evaluation

04/11/2022
by Or Honovich et al.

Grounded text generation systems often generate text that contains factual inconsistencies, hindering their real-world applicability. Automatic factual consistency evaluation may help alleviate this limitation by accelerating evaluation cycles, filtering inconsistent outputs, and augmenting training data. While attracting increasing attention, such evaluation metrics are usually developed and evaluated in isolation for a single task or dataset, slowing their adoption. Moreover, previous meta-evaluation protocols focused on system-level correlations with human annotations, which leave the example-level accuracy of such metrics unclear. In this work, we introduce TRUE: a comprehensive study of factual consistency metrics on a standardized collection of existing texts from diverse tasks, manually annotated for factual consistency. Our standardization enables an example-level meta-evaluation protocol that is more actionable and interpretable than previously reported correlations, yielding clearer quality measures. Across diverse state-of-the-art metrics and 11 datasets, we find that large-scale NLI and question generation-and-answering-based approaches achieve strong and complementary results. We recommend these methods as a starting point for model and metric developers, and hope TRUE will foster progress towards even better methods.
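To make the example-level protocol concrete, here is a minimal sketch of scoring an NLI-based consistency metric against binary human annotations, in the spirit of the metrics TRUE evaluates. The checkpoint (roberta-large-mnli), the toy examples, and the helper names are assumptions for illustration; the paper's strongest metrics use much larger fine-tuned models, so this is a sketch of the protocol, not a reproduction of it.

```python
# Sketch: example-level factual consistency via NLI entailment probability,
# meta-evaluated against binary human labels with a threshold-free measure
# (ROC AUC, as in TRUE's example-level protocol). Model choice and data
# below are stand-ins, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import roc_auc_score

MODEL = "roberta-large-mnli"  # assumed off-the-shelf stand-in
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

def consistency_score(grounding: str, generated: str) -> float:
    """Return P(entailment) of the generated text given its grounding text."""
    inputs = tokenizer(grounding, generated, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    return probs[2].item()  # index 2 = ENTAILMENT for this checkpoint

# Toy meta-evaluation: binary human labels (1 = factually consistent).
examples = [
    ("The cat sat on the mat.", "A cat was sitting on a mat.", 1),
    ("The cat sat on the mat.", "The dog chased the cat away.", 0),
]
scores = [consistency_score(g, o) for g, o, _ in examples]
labels = [y for _, _, y in examples]
print("ROC AUC:", roc_auc_score(labels, scores))
```

Treating each annotated example as a binary classification target, as above, is what makes the meta-evaluation threshold-free and directly comparable across metrics and datasets.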



Related research

10/24/2020  Go Figure! A Meta Evaluation of Factuality in Summarization
04/16/2021  Q^2: Evaluating Factual Consistency in Knowledge-Grounded Dialogues via Question Generation and Question Answering
05/23/2023  INSTRUCTSCORE: Towards Explainable Text Generation Evaluation with Automatic Feedback
06/20/2023  Open-Domain Text Evaluation via Meta Distribution Modeling
05/23/2023  APPLS: A Meta-evaluation Testbed for Plain Language Summarization
05/26/2023  AlignScore: Evaluating Factual Consistency with a Unified Alignment Function
05/26/2023  With a Little Push, NLI Models can Robustly and Efficiently Predict Faithfulness
