Neural Paraphrase Generation with Stacked Residual LSTM Networks

10/10/2016
by Aaditya Prakash, et al.

In this paper, we propose a novel neural approach for paraphrase generation. Conventional paraphrase generation methods either leverage hand-written rules and thesaurus-based alignments, or use statistical machine learning principles. To the best of our knowledge, this work is the first to explore deep learning models for paraphrase generation. Our primary contribution is a stacked residual LSTM network, where we add residual connections between LSTM layers. This allows for efficient training of deep LSTMs. We evaluate our model and other state-of-the-art deep learning models on three different datasets: PPDB, WikiAnswers, and MSCOCO. Evaluation results demonstrate that our model outperforms sequence-to-sequence, attention-based, and bi-directional LSTM models on BLEU, METEOR, TER, and an embedding-based sentence similarity metric.
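
The core architectural idea is straightforward to sketch: stack several single-layer LSTMs and add each layer's input to its output before passing it on, so that gradients can flow through the identity path during training. The following is a minimal illustration in PyTorch, not the authors' implementation; the module name StackedResidualLSTM, the layer sizes, and the exact placement of the residual addition are assumptions made for this example.

import torch
import torch.nn as nn

class StackedResidualLSTM(nn.Module):
    """Sketch of stacked LSTM layers with residual connections
    between them (assumed structure, for illustration only)."""

    def __init__(self, input_size, hidden_size, num_layers=4):
        super().__init__()
        # Stack single-layer LSTMs manually so a residual connection
        # can be inserted between consecutive layers.
        self.layers = nn.ModuleList([
            nn.LSTM(input_size if i == 0 else hidden_size,
                    hidden_size, batch_first=True)
            for i in range(num_layers)
        ])

    def forward(self, x):
        out = x
        for lstm in self.layers:
            h, _ = lstm(out)
            # Residual connection: add the layer's input to its output
            # when shapes match (they differ only at the first layer
            # if input_size != hidden_size).
            out = h + out if h.shape == out.shape else h
        return out

# Usage on a batch of embedded token sequences:
model = StackedResidualLSTM(input_size=300, hidden_size=512, num_layers=4)
x = torch.randn(8, 20, 300)  # (batch, sequence length, embedding dim)
print(model(x).shape)        # torch.Size([8, 20, 512])

Because the residual path is an identity addition, each layer only has to learn a correction to its input rather than a full transformation, which is what makes deeper LSTM stacks trainable in practice.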


Related research

09/07/2018
Cell-aware Stacked LSTMs for Modeling Sentences
We propose a method of stacking multiple long short-term memory (LSTM) l...

06/14/2016
Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation
Neural machine translation (NMT) aims at solving machine translation (MT...

05/07/2018
Sentence-State LSTM for Text Representation
Bi-directional LSTMs are a powerful tool for text representation. On the...

12/28/2017
Siamese LSTM based Fiber Structural Similarity Network (FS2Net) for Rotation Invariant Brain Tractography Segmentation
In this paper, we propose a novel deep learning architecture combining s...

12/03/2020
BERT-hLSTMs: BERT and Hierarchical LSTMs for Visual Storytelling
Visual storytelling is a creative and challenging task, aiming to automa...

04/21/2016
Chinese Song Iambics Generation with Neural Attention-based Model
Learning and generating Chinese poems is a charming yet challenging task...

01/03/2017
Shortcut Sequence Tagging
Deep stacked RNNs are usually hard to train. Adding shortcut connections...
