Word Embedding Attention Network: Generating Words by Querying Distributed Word Representations for Paraphrase Generation

03/05/2018
by   Shuming Ma, et al.

Most recent approaches use the sequence-to-sequence model for paraphrase generation. However, the existing sequence-to-sequence model tends to memorize the words and patterns in the training dataset rather than learning the meaning of the words. As a result, the generated sentences are often grammatically correct but semantically improper. In this work, we introduce a novel model based on the encoder-decoder framework, called the Word Embedding Attention Network (WEAN). Our proposed model generates words by querying distributed word representations (i.e. neural word embeddings), aiming to capture the meaning of the corresponding words. Following previous work, we evaluate our model on two paraphrase-oriented tasks, namely text simplification and short text abstractive summarization. Experimental results show that our model outperforms the sequence-to-sequence baseline by 6.3 and 5.5 BLEU points on two English text simplification datasets, and by 5.7 ROUGE-2 F1 points on a Chinese summarization dataset. Moreover, our model achieves state-of-the-art performance on all three benchmark datasets.
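The core idea of generating words by querying embeddings can be illustrated with a minimal sketch: instead of predicting each word through a separate output projection, the decoder state is used as a query, and each vocabulary word is scored by the similarity between that query and the word's embedding. The names, shapes, and toy data below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Minimal sketch (assumed shapes and toy vocabulary, not the paper's code):
# score each vocabulary word by the dot product between the decoder query
# and that word's embedding, then select the best-matching word.

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "mat"]
d = 8
E = rng.standard_normal((len(vocab), d))      # word embedding matrix (V x d)
query = E[1]                                   # decoder state acting as a query

scores = E @ query                             # relevance of each embedding
probs = np.exp(scores - scores.max())
probs /= probs.sum()                           # softmax over the vocabulary
generated = vocab[int(np.argmax(probs))]       # word whose embedding best matches
print(generated)
```

Because the output distribution is tied directly to the embedding space, words with similar embeddings receive similar scores, which is what lets the model favor semantic fit over memorized surface patterns.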
