Multi-type Disentanglement without Adversarial Training

12/16/2020
by Lei Sha, et al.

Controlling the style of natural language by disentangling the latent space is an important step towards interpretable machine learning. Once the latent space is disentangled, the style of a sentence can be transformed by tuning the style representation without affecting other features of the sentence. Previous works usually rely on adversarial training to guarantee that disentangled vectors do not affect each other. However, adversarial methods are difficult to train, especially when there are multiple features (e.g., sentiment or tense, which we call style types in this paper): each feature requires a separate discriminator to extract a disentangled style vector corresponding to that feature. In this paper, we propose a unified distribution-controlling method, which provides each specific style value (the value of a style type, e.g., positive sentiment or past tense) with a unique representation. This method provides a solid theoretical basis for avoiding adversarial training in multi-type disentanglement. We also propose multiple loss functions to achieve style-content disentanglement as well as disentanglement among multiple style types. In addition, we observe that if two different style types always have specific style values that co-occur in the dataset, they will affect each other when transferring style values. We call this phenomenon training bias, and we propose a loss function to alleviate such training bias while disentangling multiple style types. We conduct experiments on two datasets (Yelp service reviews and Amazon product reviews) to evaluate the style-disentangling effect and the unsupervised style transfer performance on two style types: sentiment and tense. The experimental results show the effectiveness of our model.
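The core idea of transfer-by-tuning can be illustrated with a minimal sketch. The dimensions, style types, and fixed style-value representations below are illustrative assumptions, not the paper's actual architecture: we assume the latent vector is a concatenation of a content part and one sub-vector per style type, and that each style value has been assigned a unique target representation, so transferring a style reduces to swapping one sub-vector.

```python
import numpy as np

# Illustrative layout (hypothetical dimensions): the latent vector is
# [content | sentiment | tense].
CONTENT_DIM = 8
STYLE_DIMS = {"sentiment": 4, "tense": 4}  # two style types, as in the paper

# Each style value gets a unique representation; here they are
# illustrative constants rather than learned vectors.
STYLE_VALUES = {
    "sentiment": {"positive": np.ones(4), "negative": -np.ones(4)},
    "tense": {"past": np.ones(4), "present": -np.ones(4)},
}

def split_latent(z):
    """Split a latent vector into its content and per-style-type parts."""
    parts = {"content": z[:CONTENT_DIM]}
    offset = CONTENT_DIM
    for name, dim in STYLE_DIMS.items():
        parts[name] = z[offset:offset + dim]
        offset += dim
    return parts

def transfer_style(z, style_type, target_value):
    """Replace one style sub-vector with the target value's representation,
    leaving the content and all other style types untouched."""
    parts = split_latent(z)
    parts[style_type] = STYLE_VALUES[style_type][target_value]
    return np.concatenate([parts["content"]] + [parts[n] for n in STYLE_DIMS])
```

For example, flipping the sentiment of a latent vector changes only its sentiment slice; the content and tense slices pass through unchanged, which is exactly the property the disentanglement losses are meant to enforce.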

Related research

- 11/01/2018: Multiple-Attribute Text Style Transfer
- 08/13/2018: Language Style Transfer from Sentences with Arbitrary Unknown Styles
- 08/13/2018: Disentangled Representation Learning for Text Style Transfer
- 12/25/2017: Domain Adaptation Meets Disentangled Representation Learning and Style Transfer
- 04/12/2023: ALADIN-NST: Self-supervised disentangled representation learning of artistic style through Neural Style Transfer
- 05/06/2021: A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations
- 08/14/2019: Dual Adversarial Inference for Text-to-Image Synthesis
