Evaluating the Factual Consistency of Large Language Models Through Summarization

11/15/2022
by Derek Tam, et al.

While large language models (LLMs) have proven to be effective on a large variety of tasks, they are also known to hallucinate information. To measure whether an LLM prefers factually consistent continuations of its input, we propose a new benchmark called FIB (Factual Inconsistency Benchmark) that focuses on the task of summarization. Specifically, our benchmark involves comparing the scores an LLM assigns to a factually consistent versus a factually inconsistent summary for an input news article. For factually consistent summaries, we use human-written reference summaries that we manually verify as factually consistent. For factually inconsistent summaries, we use summaries generated by a suite of summarization models that we manually annotate as factually inconsistent. A model's factual consistency is then measured according to its accuracy, i.e., the proportion of documents where it assigns a higher score to the factually consistent summary. To validate the usefulness of FIB, we evaluate 23 large language models ranging from 1B to 176B parameters across six model families, including BLOOM and OPT. We find that existing LLMs generally assign a higher score to factually consistent summaries than to factually inconsistent summaries. However, if the factually inconsistent summaries occur verbatim in the document, then LLMs assign a higher score to these factually inconsistent summaries than to factually consistent summaries. We also validate design choices in our benchmark, including the scoring method and the source of distractor summaries. Our code and benchmark data can be found at https://github.com/r-three/fib.
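To make the evaluation protocol concrete, the sketch below shows the core comparison: score a factually consistent and a factually inconsistent summary for the same article with an LLM, and count the fraction of articles where the consistent summary receives the higher score. It assumes a HuggingFace causal LM and uses length-normalized log-likelihood of the summary conditioned on the document as the scoring function; the model name, prompt format, and scoring function are illustrative placeholders rather than the exact choices validated in the paper.

```python
# Minimal sketch of the FIB-style comparison, assuming a HuggingFace causal LM
# and length-normalized log-likelihood as the (illustrative) scoring function.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # placeholder; the benchmark evaluates much larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def summary_score(document: str, summary: str) -> float:
    """Average log-likelihood of the summary tokens conditioned on the document."""
    prompt = f"Document: {document}\nSummary:"  # hypothetical prompt format
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    summary_ids = tokenizer(
        " " + summary, return_tensors="pt", add_special_tokens=False
    ).input_ids
    input_ids = torch.cat([prompt_ids, summary_ids], dim=1)

    with torch.no_grad():
        logits = model(input_ids).logits

    # Log-probability of each token given all preceding tokens.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = input_ids[:, 1:]
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    # Average over only the summary tokens (length normalization).
    summary_len = summary_ids.shape[1]
    return token_log_probs[:, -summary_len:].mean().item()


def fib_accuracy(examples):
    """examples: list of (document, consistent_summary, inconsistent_summary)."""
    correct = sum(
        summary_score(doc, consistent) > summary_score(doc, inconsistent)
        for doc, consistent, inconsistent in examples
    )
    return correct / len(examples)
```

Averaging the log-likelihood over summary tokens, rather than summing it, is a common way to keep the comparison from systematically favoring shorter summaries; the paper's own scoring-method ablation is the place to check which variant the released benchmark uses.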

Related research

- Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking (05/04/2022): Despite the recent advances in abstractive summarization systems, it is ...
- Evaluating Factual Consistency of Summaries with Large Language Models (05/23/2023): Detecting factual errors in summaries has been an important and challeng...
- SummaC: Re-Visiting NLI-based Models for Inconsistency Detection in Summarization (11/18/2021): In the summarization domain, a key requirement for summaries is to be fa...
- Are Large Language Models Good Evaluators for Abstractive Summarization? (05/22/2023): Human evaluations are often required for abstractive summary evaluations...
- Towards Neural Language Evaluators (09/20/2019): We review three limitations of BLEU and ROUGE -- the most popular metric...
- Evaluating the Factual Consistency of Abstractive Text Summarization (10/28/2019): Currently used metrics for assessing summarization algorithms do not acc...
- BUMP: A Benchmark of Unfaithful Minimal Pairs for Meta-Evaluation of Faithfulness Metrics (12/20/2022): The proliferation of automatic faithfulness metrics for summarization ha...
