Sparsity and Sentence Structure in Encoder-Decoder Attention of Summarization Systems

09/08/2021
by Potsawee Manakul et al.

Transformer models have achieved state-of-the-art results in a wide range of NLP tasks including summarization. Training and inference using large transformer models can be computationally expensive. Previous work has focused on one important bottleneck, the quadratic self-attention mechanism in the encoder. Modified encoder architectures such as LED or LoBART use local attention patterns to address this problem for summarization. In contrast, this work focuses on the transformer's encoder-decoder attention mechanism. The cost of this attention becomes more significant in inference or training approaches that require model-generated histories. First, we examine the complexity of the encoder-decoder attention. We demonstrate empirically that there is a sparse sentence structure in document summarization that can be exploited by constraining the attention mechanism to a subset of input sentences, whilst maintaining system performance. Second, we propose a modified architecture that selects the subset of sentences to constrain the encoder-decoder attention. Experiments are carried out on abstractive summarization tasks, including CNN/DailyMail, XSum, Spotify Podcast, and arXiv.
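To make the idea concrete, the sketch below shows one way cross-attention can be constrained to a selected subset of input sentences: a token-level mask is built from sentence indices and applied inside a single-head scaled dot-product attention step. This is a minimal toy illustration, not the paper's architecture; the function names (cross_attention, sentence_mask), the tensor shapes, and the hard-coded choice of which sentences to keep are assumptions made for the example only.

```python
# Illustrative sketch (not the authors' implementation) of restricting
# encoder-decoder (cross) attention to a subset of source sentences.
import torch
import torch.nn.functional as F


def cross_attention(queries, keys, values, token_mask):
    """Single-head scaled dot-product cross-attention with a boolean token mask.

    queries:    (tgt_len, d)  decoder states
    keys/values:(src_len, d)  encoder states
    token_mask: (src_len,)    True for source tokens the decoder may attend to
    """
    d = queries.size(-1)
    scores = queries @ keys.T / d ** 0.5                  # (tgt_len, src_len)
    scores = scores.masked_fill(~token_mask, float("-inf"))
    attn = F.softmax(scores, dim=-1)                      # attention over kept tokens only
    return attn @ values


def sentence_mask(sent_ids, kept_sentences, src_len):
    """Token-level mask that keeps only tokens belonging to the selected sentences."""
    mask = torch.zeros(src_len, dtype=torch.bool)
    for s in kept_sentences:
        mask |= sent_ids == s
    return mask


if __name__ == "__main__":
    torch.manual_seed(0)
    src_len, tgt_len, d = 12, 3, 8
    enc = torch.randn(src_len, d)                         # encoder outputs
    dec = torch.randn(tgt_len, d)                         # decoder queries
    # Toy document: 4 sentences of 3 tokens each.
    sent_ids = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3])
    # Keep sentences 0 and 2; in the paper this selection comes from a learned
    # component, here it is fixed by hand purely for illustration.
    mask = sentence_mask(sent_ids, kept_sentences=[0, 2], src_len=src_len)
    out = cross_attention(dec, enc, enc, mask)
    print(out.shape)                                      # torch.Size([3, 8])
```

Because the softmax is computed only over the retained tokens, the cost of the cross-attention step scales with the size of the selected subset rather than the full source length, which is the saving the paper exploits.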

Related research

- 11/25/2021: New Approaches to Long Document Summarization: Fourier Transform Based Attention in a Transformer Model
- 05/08/2021: Long-Span Dependencies in Transformer-based Summarization Systems
- 04/06/2021: Attention Head Masking for Inference Time Content Selection in Abstractive Summarization
- 06/09/2020: Graph-Aware Transformer: Is Attention All Graphs Need?
- 06/11/2021: Zero-Shot Controlled Generation with Encoder-Decoder Transformers
- 10/20/2016: Jointly Learning to Align and Convert Graphemes to Phonemes with Neural Attention Models
- 11/08/2019: Resurrecting Submodularity in Neural Abstractive Summarization
