TrueTeacher: Learning Factual Consistency Evaluation with Large Language Models

05/18/2023
by   Zorik Gekhman, et al.
Factual consistency evaluation is often conducted using Natural Language Inference (NLI) models, yet these models exhibit limited success in evaluating summaries. Previous work improved such models with synthetic training data. However, this data is typically based on perturbed human-written summaries, which often differ in their characteristics from real model-generated summaries and have limited coverage of possible factual errors. Alternatively, large language models (LLMs) have recently shown promising results in directly evaluating generative tasks, but are too computationally expensive for practical use. Motivated by these limitations, we introduce TrueTeacher, a method for generating synthetic data by annotating diverse model-generated summaries using an LLM. Unlike prior work, TrueTeacher does not rely on human-written summaries and is multilingual by nature. Experiments on the TRUE benchmark show that a student model trained on our data substantially outperforms both the state-of-the-art model of similar capacity and the LLM teacher. In a systematic study, we compare TrueTeacher to existing synthetic data generation methods and demonstrate its superiority and robustness to domain shift. Using the mFACE dataset, we also show that our method generalizes to multilingual scenarios. Finally, we release a large-scale synthetic dataset with 1.4M examples generated with TrueTeacher.
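The pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `llm_judge`, `build_synthetic_dataset`, and the toy summarizers are hypothetical names, and the trivial token-overlap heuristic stands in for a real LLM annotator (which would prompt a large model for a consistent/inconsistent label).

```python
def llm_judge(document: str, summary: str) -> int:
    """Stand-in for the LLM teacher: label a summary as factually
    consistent (1) or inconsistent (0) with its source document.
    A real implementation would prompt a large language model; here
    a trivial token-overlap heuristic keeps the sketch runnable."""
    return int(all(tok in document.split() for tok in summary.split()))


def build_synthetic_dataset(documents, summarizers):
    """Generate diverse model summaries, then annotate each one with
    the LLM teacher, yielding (document, summary, label) triples that
    can later be used to fine-tune a small student classifier."""
    data = []
    for doc in documents:
        for summarize in summarizers:  # diverse summarization models
            summary = summarize(doc)
            label = llm_judge(doc, summary)  # teacher annotation
            data.append((doc, summary, label))
    return data


# Toy usage: one faithful and one hallucinating "summarizer".
docs = ["the cat sat on the mat"]
faithful = lambda d: "the cat sat"
hallucinating = lambda d: "the dog ran"
dataset = build_synthetic_dataset(docs, [faithful, hallucinating])
# dataset now holds one consistent and one inconsistent training example.
```

The key design point, per the abstract, is that the summaries come from real summarization models rather than perturbed human-written ones, so the synthetic labels cover the error distribution a deployed summarizer actually produces.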


