Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents by Sampling Summary Views

05/25/2022
by Marcio Fonseca et al.

We argue that disentangling content selection from the budget used to cover salient content improves the performance and applicability of abstractive summarizers. Our method, FactorSum, performs this disentanglement by factorizing summarization into two steps through an energy function: (1) generation of abstractive summary views; (2) combination of these views into a final summary, following a budget and content guidance. This guidance may come from different sources, including an advisor model such as BART or BigBird, or, in oracle mode, from the reference summary. This factorization achieves significantly higher ROUGE scores on multiple benchmarks for long document summarization, namely PubMed, arXiv, and GovReport. Most notably, our model is effective for domain adaptation: when trained only on PubMed samples, it achieves a 46.29 ROUGE-1 score on arXiv, indicating strong performance due to more flexible budget adaptation and content selection that is less dependent on domain-specific textual structure.
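
To make the two-step factorization concrete, the sketch below illustrates the idea under simplifying assumptions: document views are random sentence subsets, the abstractive view generator is replaced by a trivial lead-sentence stand-in (summarize_view), and the budget/content guidance is approximated by word overlap with a guidance string. All function names are hypothetical; this is not the paper's implementation, which trains a neural abstractive model to produce summary views and combines them via an energy function.

```python
import random


def sample_document_views(sentences, n_views=20, view_size=8, seed=0):
    """Sample overlapping views (subsets of sentences) from a long document."""
    rng = random.Random(seed)
    k = min(view_size, len(sentences))
    return [rng.sample(sentences, k) for _ in range(n_views)]


def summarize_view(view):
    """Placeholder for the abstractive model that maps a document view to a
    summary view; FactorSum trains a neural summarizer for this step."""
    return view[0]  # trivial lead-sentence stand-in


def guidance_gain(selected, candidate, guidance_tokens):
    """Stand-in content guidance term: how many not-yet-covered guidance
    tokens would the candidate summary view add?"""
    covered = set(" ".join(selected).lower().split())
    return len(set(candidate.lower().split()) & (guidance_tokens - covered))


def combine_views(summary_views, guidance, budget_words=200):
    """Greedily add the summary view with the highest guidance gain while the
    word budget allows it (a simple proxy for the paper's combination step)."""
    guidance_tokens = set(guidance.lower().split())
    selected, n_words = [], 0
    candidates = list(summary_views)
    while candidates:
        best = max(candidates, key=lambda v: guidance_gain(selected, v, guidance_tokens))
        candidates.remove(best)
        if guidance_gain(selected, best, guidance_tokens) == 0:
            break  # no remaining view adds guided content
        if n_words + len(best.split()) <= budget_words:
            selected.append(best)
            n_words += len(best.split())
    return " ".join(selected)


if __name__ == "__main__":
    document = [
        "We describe the dataset and preprocessing pipeline.",
        "The model is trained with a standard cross-entropy objective.",
        "Results show consistent gains on long-document benchmarks.",
        "Ablations confirm that budget guidance drives most of the gains.",
    ]
    guidance = "results gains budget"  # e.g. an advisor summary, or the reference in oracle mode
    views = sample_document_views(document, n_views=10, view_size=2)
    summary_views = [summarize_view(v) for v in views]
    print(combine_views(summary_views, guidance, budget_words=30))
```

In this sketch, swapping the guidance string corresponds to the different guidance sources in the abstract: an advisor model's output (e.g., BART or BigBird) at test time, or the reference summary in oracle mode.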


Related research

01/12/2019 - What comes next? Extractive summarization by next-sentence prediction
Existing approaches to automatic summarization assume that a length limi...

05/14/2019 - Ontology-Aware Clinical Abstractive Summarization
Automatically generating accurate summaries from clinical reports could ...

06/26/2021 - A Training-free and Reference-free Summarization Evaluation Metric via Centrality-weighted Relevance and Self-referenced Redundancy
In recent years, reference-based and supervised summarization evaluation...

03/21/2022 - HIBRIDS: Attention with Hierarchical Biases for Structure-aware Long Document Summarization
Document structure is critical for efficient information consumption. Ho...

10/22/2022 - Salience Allocation as Guidance for Abstractive Summarization
Abstractive summarization models typically learn to capture the salient ...

07/21/2017 - A Pilot Study of Domain Adaptation Effect for Neural Abstractive Summarization
We study the problem of domain adaptation for neural abstractive summari...

10/15/2020 - Compressive Summarization with Plausibility and Salience Modeling
Compressive summarization systems typically rely on a crafted set of syn...
