ReDecode Framework for Iterative Improvement in Paraphrase Generation

11/11/2018
by Milan Aggarwal et al.

Generating paraphrases, that is, different variations of a sentence conveying the same meaning, is an important yet challenging task in NLP. Automatic paraphrase generation is useful in many NLP tasks, such as question answering, information retrieval, and conversational systems. In this paper, we introduce iterative refinement of generated paraphrases within a VAE-based generation framework. Current sequence generation models lack the capability to (1) make improvements once the sentence is generated and (2) rectify errors made while decoding. We propose a technique to iteratively refine the output using multiple decoders, each one attending over the output sentence generated by the previous decoder. We improve on current state-of-the-art results significantly, with gains of over 9 points on the Quora question pairs and MSCOCO datasets. We also show qualitatively, through examples, that our re-decoding approach generates better paraphrases than a single decoder: it rectifies errors, improves paraphrase structure, induces variation, and introduces new but semantically coherent information.
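The abstract describes the re-decoding mechanism only at a high level. The PyTorch sketch below illustrates one way the decoder chain could be wired up, assuming a VAE-style GRU encoder, teacher-forced GRU decoders, and multi-head attention from each decoder over the previous decoder's hidden states (a simplification of attending over the previously generated sentence). The class name ReDecodeSketch, the layer choices, and all sizes are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch of the re-decoding idea: a VAE encoder produces a latent
# code, and a chain of decoders each attends over the states produced by the
# previous decoder, giving later decoders a chance to rectify earlier errors.
import torch
import torch.nn as nn

class ReDecodeSketch(nn.Module):
    def __init__(self, vocab_size, hidden=256, latent=128, num_decoders=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.latent_to_h = nn.Linear(latent, hidden)
        self.decoders = nn.ModuleList(
            [nn.GRU(hidden, hidden, batch_first=True) for _ in range(num_decoders)]
        )
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence and sample a latent code
        # via the reparameterization trick.
        src = self.embed(src_ids)
        _, h = self.encoder(src)                      # h: (1, B, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        h0 = self.latent_to_h(z).unsqueeze(0)         # shared initial decoder state

        tgt = self.embed(tgt_ids)
        prev_states = None                            # first decoder attends to nothing
        logits = None
        for dec in self.decoders:
            states, _ = dec(tgt, h0)                  # teacher-forced decoding
            if prev_states is not None:
                # Later decoders attend over the previous decoder's output
                # states and mix the attended context back in.
                ctx, _ = self.attn(states, prev_states, prev_states)
                states = states + ctx
            prev_states = states
            logits = self.out(states)                 # keep the last decoder's logits
        return logits, mu, logvar
```

In the paper's framing each decoder emits a full paraphrase, so in practice one would decode greedily (or by sampling) from each stage and feed the resulting token embeddings to the next decoder's attention; the teacher-forced version above keeps the sketch short and differentiable end to end.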


Related research

09/15/2017 · A Deep Generative Framework for Paraphrase Generation
Paraphrase generation is an important problem in NLP, especially in ques...

10/07/2020 · Cross-Thought for Sentence Encoder Pre-training
In this paper, we propose Cross-Thought, a novel approach to pre-trainin...

12/16/2021 · Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks
Retrieval-augmented generation models have shown state-of-the-art perfor...

11/27/2019 · Label Dependent Deep Variational Paraphrase Generation
Generating paraphrases that are lexically similar but semantically diffe...

11/20/2019 · Co-Attention Hierarchical Network: Generating Coherent Long Distractors for Reading Comprehension
In reading comprehension, generating sentence-level distractors is a sig...

11/16/2022 · Consecutive Question Generation via Dynamic Multitask Learning
In this paper, we propose the task of consecutive question generation (C...

03/30/2023 · Self-Refine: Iterative Refinement with Self-Feedback
Like people, LLMs do not always generate the best text for a given gener...
