Towards Abstraction from Extraction: Multiple Timescale Gated Recurrent Unit for Summarization

07/04/2016
by Minsoo Kim, et al.

In this work, we introduce temporal hierarchies to the sequence-to-sequence (seq2seq) model to tackle the problem of abstractive summarization of scientific articles. The proposed Multiple Timescale model of the Gated Recurrent Unit (MTGRU) is implemented in the encoder-decoder setting to better handle the multiple compositionalities present in longer texts. The proposed model is compared to the conventional RNN encoder-decoder, and the results demonstrate that our model trains faster and achieves significant performance gains. The results also show that temporal hierarchies improve the ability of seq2seq models to capture compositionality without requiring highly complex architectural hierarchies.
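The abstract does not give the MTGRU equations, but the core idea of a layer-wise timescale can be illustrated with a small sketch. The NumPy code below is an assumption-based illustration rather than the paper's exact formulation: it takes the standard GRU update and interpolates its output with the previous hidden state through a timescale constant tau, so that cells with larger tau change more slowly. All names (MTGRUCell, the tau values, the toy dimensions) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MTGRUCell:
    """GRU cell with a layer-wise timescale constant tau (illustrative sketch).

    tau = 1 reduces to the standard GRU; larger tau makes the hidden state
    evolve more slowly, so stacking cells with increasing tau yields the kind
    of temporal hierarchy described in the abstract (fast lower layers,
    slow upper layers).
    """

    def __init__(self, input_size, hidden_size, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        def w(rows, cols):
            return rng.normal(0.0, 0.1, (rows, cols))
        # Parameters for the update gate z, reset gate r, and candidate state.
        self.Wz, self.Uz = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.Wr, self.Ur = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.Wh, self.Uh = w(hidden_size, input_size), w(hidden_size, hidden_size)
        self.tau = tau

    def step(self, x, h_prev):
        z = sigmoid(self.Wz @ x + self.Uz @ h_prev)               # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h_prev)               # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h_prev))    # candidate state
        h_gru = (1.0 - z) * h_prev + z * h_cand                   # standard GRU update
        # Assumed timescale mixing: slow layers retain more of the previous state.
        return (1.0 / self.tau) * h_gru + (1.0 - 1.0 / self.tau) * h_prev


# Toy usage: a fast lower layer feeding a slower upper layer.
lower = MTGRUCell(input_size=16, hidden_size=32, tau=1.0)
upper = MTGRUCell(input_size=32, hidden_size=32, tau=4.0)  # tau value is illustrative
h1 = np.zeros(32)
h2 = np.zeros(32)
for x in np.random.default_rng(1).normal(size=(10, 16)):   # dummy input sequence
    h1 = lower.step(x, h1)
    h2 = upper.step(h1, h2)
```

In the encoder-decoder setting, cells like these would replace the standard GRU in the stacked layers, with tau increasing toward the higher layers; the exact per-layer timescale values and where the mixing is applied are details specified in the paper itself, not in this sketch.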
