Resurrecting Submodularity in Neural Abstractive Summarization

11/08/2019
by Simeng Han, et al.

Submodularity is a desirable property for a variety of content-selection objectives in summarization, an area where the encoder-decoder framework is deficient. We propose 'diminishing attentions', a class of novel attention mechanisms that are architecturally simple yet empirically effective in improving the coverage of neural abstractive summarization by exploiting the properties of submodular functions. Without adding any extra parameters to the Pointer-Generator baseline, our attention mechanism yields significant improvements in ROUGE scores and generates summaries of better quality. Our method within the Pointer-Generator framework outperforms the recently proposed Transformer model for summarization while using five times fewer parameters. Our method also achieves state-of-the-art results in abstractive summarization when applied to the encoder-decoder attention of a Transformer model initialized with BERT.
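The abstract describes diminishing attentions only at a high level. As a rough, hypothetical sketch of the diminishing-returns idea (not the authors' released code), one way to realize it is to pass the cumulative attention mass through a concave, non-decreasing function and attend with its marginal gain at each decoding step. The function name diminishing_attention, the choice of square root as the concave function, and the tensor shapes below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def diminishing_attention(scores, coverage, eps=1e-8):
    """Illustrative sketch of a diminishing-returns attention step.

    scores:   raw attention logits for the current decoding step, shape (batch, src_len)
    coverage: attention mass already allocated to each source token, shape (batch, src_len)

    A concave transform (sqrt here, one possible choice) of the cumulative
    attention makes repeated attention to the same source positions yield
    smaller and smaller gains, mimicking the diminishing-returns property
    of submodular functions.
    """
    attn = F.softmax(scores, dim=-1)            # standard attention for this step
    new_coverage = coverage + attn              # accumulated attention mass so far
    # Effective attention is the marginal gain of the concave-transformed coverage.
    diminished = torch.sqrt(new_coverage + eps) - torch.sqrt(coverage + eps)
    diminished = diminished / (diminished.sum(dim=-1, keepdim=True) + eps)  # renormalize
    return diminished, new_coverage
```

Because the square root is concave and non-decreasing, the marginal gain of attending again to an already heavily covered source token shrinks, which is the submodular diminishing-returns behavior the paper exploits to improve coverage without adding parameters.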


