Annotating and Modeling Fine-grained Factuality in Summarization

04/09/2021
by Tanya Goyal, et al.

Recent pre-trained abstractive summarization systems have started to achieve credible performance, but a major barrier to their use in practice is their propensity to output summaries that are not faithful to the input and that contain factual errors. While a number of annotated datasets and statistical models for assessing factuality have been explored, there is no clear picture of what errors are most important to target or where current techniques are succeeding and failing. We explore both synthetic and human-labeled data sources for training models to identify factual errors in summarization, and study factuality at the word-, dependency-, and sentence-level. Our observations are threefold. First, exhibited factual errors differ significantly across datasets, and commonly-used training sets of simple synthetic errors do not reflect errors made on abstractive datasets like XSum. Second, human-labeled data with fine-grained annotations provides a more effective training signal than sentence-level annotations or synthetic data. Finally, we show that our best factuality detection model enables training of more factual XSum summarization models by allowing us to identify non-factual tokens in the training data.
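The paper's idea of localizing factuality below the sentence level can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: it extracts dependency arcs from a generated summary with spaCy and flags each arc as supported or unsupported by the source using a placeholder `arc_is_factual` check; tokens on unsupported arcs are the kind of non-factual tokens whose loss could be masked or down-weighted when training a more factual summarizer. The arc check here is a crude, hypothetical stand-in for a trained dependency-level factuality model.

```python
# Minimal sketch of dependency-level factuality labeling (illustrative only;
# the arc check below is a hypothetical stand-in for a trained model).
import spacy

nlp = spacy.load("en_core_web_sm")


def arc_is_factual(head: str, relation: str, child: str, source: str) -> bool:
    """Hypothetical arc-level factuality check.

    A real system would score the (head, relation, child) arc against the
    source document with a learned classifier; here we use a crude proxy:
    the arc counts as supported only if both words appear in the source.
    """
    src = source.lower()
    return head.lower() in src and child.lower() in src


def nonfactual_tokens(summary: str, source: str) -> set:
    """Return indices of summary tokens that sit on unsupported arcs."""
    doc = nlp(summary)
    flagged = set()
    for tok in doc:
        if tok.dep_ == "ROOT":
            continue
        if not arc_is_factual(tok.head.text, tok.dep_, tok.text, source):
            flagged.add(tok.i)       # child of the unsupported arc
            flagged.add(tok.head.i)  # its head
    return flagged


source = "The company reported a small rise in quarterly profit."
summary = "The company reported a sharp fall in quarterly profit."
print(nonfactual_tokens(summary, source))
# Tokens on unsupported arcs (e.g. around 'sharp fall') would be masked or
# down-weighted in the summarization model's training loss.
```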
