Salience Estimation with Multi-Attention Learning for Abstractive Text Summarization

04/07/2020
by Piji Li, et al.

The attention mechanism plays a dominant role in sequence generation models and has been used to improve the performance of machine translation and abstractive text summarization. Unlike in neural machine translation, however, salience estimation for words, phrases, or sentences is a critical component of text summarization, since the output summary is a distillation of the input text. Although the typical attention mechanism can select text fragments from the input conditioned on the decoder states, it still falls short of direct and effective salience detection. To restore direct salience estimation to neural summarization, we propose a Multi-Attention Learning framework containing two new attention learning components for salience estimation: supervised attention learning and unsupervised attention learning. We treat the attention weights as salience information, so that semantic units with larger attention values are considered more important. The context information obtained from the estimated salience is combined with the typical attention mechanism in the decoder to generate the summary. Extensive experiments on benchmark datasets in different languages demonstrate the effectiveness of the proposed framework for abstractive summarization.
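
To make the mechanism concrete, here is a minimal sketch (PyTorch) of one way a decoder-independent salience attention can be fused with the typical decoder-conditioned attention. Everything here is an illustrative assumption rather than the paper's exact architecture: the module and variable names are hypothetical, the additive score functions and concatenation-based fusion are placeholders, and the supervised/unsupervised training signals for the salience weights are only noted in comments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiAttentionSketch(nn.Module):
    """Sketch of combining two attentions over the encoder states:
    (1) the typical attention, conditioned on the decoder state, and
    (2) a decoder-independent salience attention whose weights are
    interpreted as the importance of each input unit."""

    def __init__(self, hidden_size):
        super().__init__()
        # Typical attention: scores each encoder state against the decoder state.
        self.attn_score = nn.Linear(2 * hidden_size, 1)
        # Salience attention: scores each encoder state on its own. In training,
        # these weights could receive a supervised signal (e.g. overlap of input
        # words with the reference summary) or an unsupervised one (e.g. a
        # reconstruction objective); neither is implemented in this sketch.
        self.salience_score = nn.Linear(hidden_size, 1)
        # Fuses the two context vectors; concatenation is an assumption.
        self.fuse = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, hidden); dec_state: (batch, hidden)
        src_len = enc_states.size(1)

        # Typical decoder-conditioned attention and its context vector.
        dec_exp = dec_state.unsqueeze(1).expand(-1, src_len, -1)
        attn_logits = self.attn_score(torch.cat([enc_states, dec_exp], dim=-1))
        attn_weights = F.softmax(attn_logits.squeeze(-1), dim=-1)
        attn_ctx = torch.bmm(attn_weights.unsqueeze(1), enc_states).squeeze(1)

        # Salience weights depend only on the input; larger weight = more salient.
        sal_weights = F.softmax(self.salience_score(enc_states).squeeze(-1), dim=-1)
        sal_ctx = torch.bmm(sal_weights.unsqueeze(1), enc_states).squeeze(1)

        # Incorporate the salience context with the typical attention context.
        return torch.tanh(self.fuse(torch.cat([attn_ctx, sal_ctx], dim=-1)))

# Toy usage: batch of 2, source length 30, hidden size 256.
model = MultiAttentionSketch(hidden_size=256)
context = model(torch.randn(2, 30, 256), torch.randn(2, 256))  # shape: (2, 256)
```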

Related Research

10/29/2019 · Contrastive Attention Mechanism for Abstractive Sentence Summarization
We propose a contrastive attention mechanism to extend the sequence-to-s...

02/24/2020 · GRET: Global Representation Enhanced Transformer
Transformer, based on the encoder-decoder framework, has achieved state-...

02/25/2023 · Abstractive Text Summarization using Attentive GRU based Encoder-Decoder
In today's era, a huge volume of information exists everywhere. Therefore, i...

08/22/2018 · Learning When to Concentrate or Divert Attention: Self-Adaptive Attention Temperature for Neural Machine Translation
Most of the Neural Machine Translation (NMT) models are based on the seq...

11/12/2018 · An Introductory Survey on Attention Mechanisms in NLP Problems
First derived from human intuition, later adapted to machine translation...

11/10/2019 · Modelling Bahdanau Attention using Election methods aided by Q-Learning
Neural Machine Translation has lately gained a lot of "attention" with t...

05/22/2017 · A Regularized Framework for Sparse and Structured Neural Attention
Modern neural networks are often augmented with an attention mechanism, ...
