TempoSum: Evaluating the Temporal Generalization of Abstractive Summarization

05/03/2023
by Chi Seng Cheang, et al.

Recent pre-trained language models (PLMs) achieve promising results on existing abstractive summarization datasets. However, existing summarization benchmarks overlap in time with the standard pre-training corpora and fine-tuning datasets. Hence, the strong performance of PLMs may rely on parametric knowledge memorized during pre-training and fine-tuning. Moreover, the knowledge memorized by PLMs may quickly become outdated, which affects their generalization to future data. In this work, we propose TempoSum, a novel benchmark containing data samples from 2010 to 2022, to understand the temporal generalization ability of abstractive summarization models. Through extensive human evaluation, we show that the parametric knowledge stored in summarization models significantly affects the faithfulness of the generated summaries on future data. Moreover, existing faithfulness-enhancement methods cannot reliably improve the faithfulness of summarization models on future data. Finally, we discuss several recommendations for the research community on how to evaluate and improve the temporal generalization capability of text summarization models.
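To make the evaluation setup concrete, the sketch below shows one way such a temporal probe can be run: a dated summarization corpus is split around a model's assumed pre-training cutoff, and the same model summarizes both the seen-era and the future articles so that the faithfulness of the two sets of outputs can be compared. This is a minimal illustration, not the authors' released code; the field names, the cutoff date, and the BART checkpoint are hypothetical choices for this example.

```python
# Illustrative sketch (not the authors' code): probing temporal generalization
# by splitting a dated summarization corpus around an assumed pre-training cutoff.
from datetime import date

from transformers import pipeline

PRETRAIN_CUTOFF = date(2019, 12, 31)  # hypothetical pre-training data cutoff


def temporal_splits(samples):
    """Partition dated samples into a 'past' (seen-era) and a 'future' split."""
    past = [s for s in samples if s["date"] <= PRETRAIN_CUTOFF]
    future = [s for s in samples if s["date"] > PRETRAIN_CUTOFF]
    return past, future


def summarize_split(summarizer, split):
    """Generate a summary for each article in one temporal split."""
    return [
        summarizer(s["article"], max_length=128, min_length=10, do_sample=False)[0]["summary_text"]
        for s in split
    ]


if __name__ == "__main__":
    # Any pre-trained summarizer can be probed; BART is used purely as an example.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    corpus = [
        {"date": date(2015, 6, 1), "article": "Text of a pre-cutoff news article..."},
        {"date": date(2022, 3, 14), "article": "Text of a post-cutoff news article..."},
    ]
    past, future = temporal_splits(corpus)
    past_summaries = summarize_split(summarizer, past)
    future_summaries = summarize_split(summarizer, future)
    # The two sets of summaries would then be scored for faithfulness,
    # e.g. by human judges as in the paper.
```

Because the model, decoding settings, and input domain stay fixed across the two splits, a drop in faithfulness on the future split can more plausibly be attributed to the temporal mismatch with memorized parametric knowledge rather than to modeling differences.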
