Analyzing the Abstractiveness-Factuality Tradeoff With Nonlinear Abstractiveness Constraints

08/05/2021
by Markus Dreyer, et al.

We analyze the tradeoff between the factuality and abstractiveness of summaries. We introduce abstractiveness constraints to control the degree of abstractiveness at decoding time, and we apply this technique to characterize the abstractiveness-factuality tradeoff across multiple widely studied datasets, using extensive human evaluations. We train a neural summarization model on each dataset and visualize the rates of change in factuality as we gradually increase abstractiveness using our abstractiveness constraints. We observe that, while factuality generally drops with increased abstractiveness, different datasets lead to different rates of factuality decay. We propose new measures to quantify the tradeoff between factuality and abstractiveness, including muQAGS, which balances factuality with abstractiveness. We also quantify this tradeoff in previous work, aiming to establish baselines for the abstractiveness-factuality tradeoff that future publications can compare against.
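
The abstract does not spell out how the abstractiveness constraints are enforced at decoding time. As a rough, hypothetical illustration of the general idea, the sketch below rescores beam-search hypotheses with a penalty on long n-grams copied verbatim from the source, so that more extractive candidates rank lower. The function names and the max_copy_len and alpha parameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a decoding-time abstractiveness constraint:
# penalize beam hypotheses that copy long spans verbatim from the source.
# Not the authors' implementation; thresholds and weights are made up.

def copied_ngram_penalty(source_tokens, hyp_tokens, max_copy_len=3, alpha=1.0):
    """Count hypothesis n-grams (of length max_copy_len + 1) that appear
    verbatim in the source; return a non-positive score adjustment."""
    n = max_copy_len + 1
    source_ngrams = {
        tuple(source_tokens[i:i + n])
        for i in range(len(source_tokens) - n + 1)
    }
    copies = sum(
        1
        for i in range(len(hyp_tokens) - n + 1)
        if tuple(hyp_tokens[i:i + n]) in source_ngrams
    )
    return -alpha * copies

def rescore_beam(source_tokens, beam, max_copy_len=3, alpha=1.0):
    """Add the copy penalty to each (hypothesis, log_prob) pair and
    re-sort, pushing highly extractive hypotheses down the beam."""
    rescored = [
        (hyp, lp + copied_ngram_penalty(source_tokens, hyp, max_copy_len, alpha))
        for hyp, lp in beam
    ]
    return sorted(rescored, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    source = "the quick brown fox jumps over the lazy dog".split()
    beam = [
        ("the quick brown fox jumps over the dog".split(), -1.0),  # mostly copied
        ("a fast fox leaps over a sleeping dog".split(), -1.5),    # paraphrased
    ]
    for hyp, score in rescore_beam(source, beam):
        print(f"{score:6.2f}  {' '.join(hyp)}")
```

Raising alpha or lowering max_copy_len pushes decoding toward more abstractive output; sweeping such a knob is the kind of procedure one would use to trace the factuality-versus-abstractiveness curves the abstract describes.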

Related research

04/30/2018
Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies
We present NEWSROOM, a summarization dataset of 1.3 million articles and...

10/24/2020
Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation
Summaries generated by abstractive summarization are supposed to only co...

06/24/2022
A Fundamental Limit of Distributed Hypothesis Testing Under Memoryless Quantization
We study a distributed hypothesis testing setup where peripheral nodes s...

05/19/2019
Structured Summarization of Academic Publications
We propose SUSIE, a novel summarization method that can work with state-...

03/10/2023
Tradeoff of generalization error in unsupervised learning
Finding the optimal model complexity that minimizes the generalization e...

10/25/2022
Information Filter upon Diversity-Improved Decoding for Diversity-Faithfulness Tradeoff in NLG
Some Natural Language Generation (NLG) tasks require both faithfulness a...

01/31/2020
Approximate Summaries for Why and Why-not Provenance (Extended Version)
Why and why-not provenance have been studied extensively in recent years...
