Factorising Meaning and Form for Intent-Preserving Paraphrasing

05/31/2021
by Tom Hosking, et al.

We propose a method for generating paraphrases of English questions that retain the original intent but use a different surface form. Our model combines a careful choice of training objective with a principled information bottleneck to induce a latent encoding space that disentangles meaning and form. We train an encoder-decoder model to reconstruct a question from a paraphrase with the same meaning and an exemplar with the same surface form, leading to separated encoding spaces. We use a Vector-Quantized Variational Autoencoder to represent the surface form as a set of discrete latent variables, allowing us to use a classifier to select a different surface form at test time. Crucially, our method does not require access to an external source of target exemplars. Extensive experiments and a human evaluation show that we are able to generate paraphrases with a better trade-off between semantic preservation and syntactic novelty compared to previous methods.
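The abstract describes three interacting pieces: a meaning encoder fed by a paraphrase, a form encoder fed by an exemplar, and a vector-quantization bottleneck on the form encoding so that surface form becomes a discrete, manipulable code. Below is a minimal PyTorch sketch of that arrangement, not the authors' released implementation: all module names, dimensions, the GRU encoders, and the use of a single code per sentence (the paper uses a set of discrete variables) are illustrative assumptions.

    # Sketch of a meaning/form-factorised paraphraser with a VQ bottleneck.
    # Hypothetical names and sizes throughout; not the paper's actual code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class VectorQuantizer(nn.Module):
        """Maps a continuous form encoding to its nearest codebook entry."""

        def __init__(self, num_codes: int = 256, dim: int = 512):
            super().__init__()
            self.codebook = nn.Embedding(num_codes, dim)

        def forward(self, z: torch.Tensor):
            # z: (batch, dim). Find the nearest codebook vector per input.
            distances = torch.cdist(z, self.codebook.weight)  # (batch, num_codes)
            codes = distances.argmin(dim=-1)                  # (batch,)
            z_q = self.codebook(codes)                        # (batch, dim)
            # Standard VQ-VAE losses: pull code vectors toward encoder
            # outputs, and commit encoder outputs to their assigned codes.
            vq_loss = F.mse_loss(z_q, z.detach()) + 0.25 * F.mse_loss(z_q.detach(), z)
            # Straight-through estimator: copy gradients from z_q back to z.
            z_q = z + (z_q - z).detach()
            return z_q, codes, vq_loss


    class MeaningFormParaphraser(nn.Module):
        """Reconstructs a question from a meaning source and a form source."""

        def __init__(self, vocab_size: int = 32000, dim: int = 512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.meaning_encoder = nn.GRU(dim, dim, batch_first=True)
            self.form_encoder = nn.GRU(dim, dim, batch_first=True)
            self.quantizer = VectorQuantizer(dim=dim)
            self.decoder = nn.GRU(dim, 2 * dim, batch_first=True)
            self.out = nn.Linear(2 * dim, vocab_size)

        def forward(self, paraphrase, exemplar, target):
            # Meaning comes only from the paraphrase; form only from the
            # exemplar, so reconstruction forces the two spaces apart.
            _, z_meaning = self.meaning_encoder(self.embed(paraphrase))
            _, z_form = self.form_encoder(self.embed(exemplar))
            z_form_q, codes, vq_loss = self.quantizer(z_form.squeeze(0))
            # Condition the decoder on (meaning, quantised form) together.
            h0 = torch.cat([z_meaning.squeeze(0), z_form_q], dim=-1).unsqueeze(0)
            dec_out, _ = self.decoder(self.embed(target), h0)
            return self.out(dec_out), codes, vq_loss

In this reading of the abstract, the discrete codes are what make test-time control possible: rather than requiring a gold exemplar, a classifier over the codebook could propose a different surface-form code (here, a different `codes` index) to decode the same meaning encoding into a syntactically novel paraphrase.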

Related research

- Hierarchical Sketch Induction for Paraphrase Generation (03/07/2022): We propose a generative model of paraphrase generation that encourages ...
- A Discrete CVAE for Response Generation on Short-Text Conversation (11/22/2019): Neural conversation models such as encoder-decoder models are easy to ge...
- Composed Variational Natural Language Generation for Few-shot Intents (09/21/2020): In this paper, we focus on generating training examples for few-shot int...
- Towards Unsupervised Content Disentanglement in Sentence Representations via Syntactic Roles (06/22/2022): Linking neural representations to linguistic factors is crucial in order...
- Upmixing via style transfer: a variational autoencoder for disentangling spatial images and musical content (03/22/2022): In the stereo-to-multichannel upmixing problem for music, one of the mai...
- Principal Bit Analysis: Autoencoding with Schur-Concave Loss (06/05/2021): We consider a linear autoencoder in which the latent variables are quant...
- Unsupervised Opinion Summarisation in the Wasserstein Space (11/27/2022): Opinion summarisation synthesises opinions expressed in a group of docum...
