Attention-Aware Inference for Neural Abstractive Summarization

09/15/2020
by   Ye Ma, et al.

Inspired by Google's Neural Machine Translation (NMT) <cit.>, which models the one-to-one alignment in translation tasks with an optimal uniform attention distribution during inference, this study proposes an attention-aware inference algorithm for Neural Abstractive Summarization (NAS) that regulates generated summaries to attend to source paragraphs/sentences with optimal coverage. Unlike in NMT, attention-aware inference for NAS requires predicting the optimal attention distribution. Therefore, an attention-prediction model is constructed to learn the dependency between attention weights and sources. To apply attention-aware inference to multi-document summarization, a Hierarchical Transformer (HT) is developed that accepts lengthy inputs while also capturing cross-document information. Experiments on WikiSum <cit.> suggest that the proposed HT already outperforms other strong Transformer-based baselines. By refining regular beam search with attention-aware inference, further significant improvements in summary quality are observed. Last but not least, attention-aware inference can be adapted to single-document summarization with straightforward modifications according to the model architecture.
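To make the core idea concrete, the sketch below rescores regular beam-search candidates by how closely their accumulated cross-attention over source units matches a predicted optimal attention distribution. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the function names (attention_aware_score, rerank_beam), the KL-divergence coverage penalty, and the trade-off weight gamma are all introduced here for illustration.

```python
import numpy as np

def attention_aware_score(log_prob, attn_history, predicted_attn, gamma=1.0):
    """Combine a candidate's log-likelihood with an attention-coverage term.

    log_prob       : cumulative log-probability of the candidate summary
    attn_history   : (T, S) array of decoder cross-attention weights over
                     S source units for each of the T generated tokens
    predicted_attn : (S,) target distribution from the attention-prediction
                     model (hypothetical interface)
    gamma          : trade-off between fluency and attention coverage
    """
    # Accumulate the attention mass each source unit received and normalise.
    coverage = attn_history.sum(axis=0)
    coverage = coverage / (coverage.sum() + 1e-12)

    # Penalise divergence from the predicted optimal distribution.
    # KL divergence is one plausible choice; the paper's formulation may differ.
    kl = np.sum(predicted_attn * np.log((predicted_attn + 1e-12) / (coverage + 1e-12)))

    return log_prob - gamma * kl


def rerank_beam(candidates, predicted_attn, gamma=1.0):
    """Pick the best candidate under the attention-aware score.

    candidates : list of (tokens, log_prob, attn_history) tuples produced
                 by an ordinary beam search.
    """
    return max(
        candidates,
        key=lambda c: attention_aware_score(c[1], c[2], predicted_attn, gamma),
    )
```

In this reading, the attention-prediction model supplies predicted_attn ahead of decoding, and the reranking step is what distinguishes attention-aware inference from plain likelihood-based beam search.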


