Revision in Continuous Space: Fine-Grained Control of Text Style Transfer

05/29/2019
by   Dayiheng Liu, et al.

Typical methods for unsupervised text style transfer rely on two key ingredients: 1) disentangling the content from the attributes, and 2) cumbersome adversarial learning. In this paper, we show that neither component is indispensable. We propose a new framework that dispenses with both and instead consists of three key components: a variational auto-encoder (VAE), attribute predictors (one per attribute), and a content predictor. The VAE and the two types of predictors enable us to perform gradient-based optimization in the continuous space, mapped from sentences in the discrete space, to find the representation of a target sentence with the desired attributes and preserved content. Moreover, the proposed method can, for the first time, simultaneously manipulate multiple fine-grained attributes, such as sentence length and the presence of specific words, in synergy when performing text style transfer. Extensive experiments on three popular text style transfer tasks show that the proposed method significantly outperforms five state-of-the-art methods.
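The core idea of revising in continuous space can be illustrated with a toy sketch: encode a sentence to a latent code, then take gradient steps on that code so an attribute predictor's score rises while a content term keeps the code near the original. All models below are stand-in linear functions of my own choosing, not the paper's actual VAE or predictor networks.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def revise_latent(z0, w_attr, steps=100, lr=0.5, content_weight=1.0):
    """Gradient-based revision of a latent code.

    Pushes z toward a high attribute score (toy predictor: sigmoid(w_attr @ z))
    while penalizing squared distance from the original code z0, a simple
    proxy for content preservation.
    """
    z = z0.copy()
    for _ in range(steps):
        s = sigmoid(w_attr @ z)
        grad_attr = (1.0 - s) * w_attr               # d/dz log sigmoid(w @ z)
        grad_content = -2.0 * content_weight * (z - z0)  # pull back toward z0
        z = z + lr * (grad_attr + grad_content)
    return z

rng = np.random.default_rng(0)
w_attr = rng.normal(size=8)   # hypothetical attribute predictor weights
z0 = rng.normal(size=8)       # hypothetical latent code of a source sentence
z1 = revise_latent(z0, w_attr)
print(sigmoid(w_attr @ z0), sigmoid(w_attr @ z1))
```

In the actual framework, the revised code would then be decoded by the VAE into a target sentence; here the point is only that the attribute score increases while z1 stays close to z0, and that extra attribute terms could be summed into the same objective to manipulate several attributes jointly.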


