BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization

06/12/2019
by   Kai Wang, et al.

The success of neural summarization models stems from the meticulous encoding of source articles. To overcome the impediments of limited and sometimes noisy training data, one promising direction is to make better use of the available training data by applying filters during summarization. In this paper, we propose a novel Bi-directional Selective Encoding with Template (BiSET) model, which leverages templates discovered from the training data to softly select key information from each source article and guide its summarization process. Extensive experiments were conducted on a standard summarization dataset, and the results show that the template-equipped BiSET model improves summarization performance significantly, achieving a new state of the art.
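The abstract describes a soft selection mechanism: a template retrieved from the training data attends to the source article, and a gate decides how much of each article representation to keep. Below is a minimal numpy sketch of one plausible form of such a selective gate; the attention form, the sigmoid gate, and the parameter names `W_a` and `W_t` are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_gate(article, template, rng=None):
    """Sketch of a template-guided selective gate.

    article:  (n, d) hidden states of the source article words
    template: (m, d) hidden states of the retrieved template words
    Returns gated article states of shape (n, d).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d = article.shape[1]
    # Template-to-article attention: build a template context for each
    # article position (scaled dot-product attention, an assumption here).
    scores = article @ template.T / np.sqrt(d)    # (n, m)
    ctx = softmax(scores, axis=-1) @ template     # (n, d)
    # Hypothetical gate parameters, random for the sketch; in a trained
    # model these would be learned.
    W_a = rng.standard_normal((d, d)) / np.sqrt(d)
    W_t = rng.standard_normal((d, d)) / np.sqrt(d)
    gate = 1.0 / (1.0 + np.exp(-(article @ W_a + ctx @ W_t)))  # in (0, 1)
    # Softly select: each article feature is scaled by its gate value.
    return gate * article
```

The gate output lies in (0, 1) per feature, so irrelevant parts of the article representation are attenuated rather than hard-pruned, which is what "softly select" means in the abstract.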


