Improving Disentangled Text Representation Learning with Information-Theoretic Guidance

06/01/2020
by Pengyu Cheng, et al.

Learning disentangled representations of natural language is essential for many NLP tasks, e.g., conditional text generation, style transfer, and personalized dialogue systems. Similar problems have been studied extensively for other forms of data, such as images and videos. However, the discrete nature of natural language makes disentangling textual representations more challenging (e.g., manipulations in the data space cannot be easily achieved). Inspired by information theory, we propose a novel method that effectively yields disentangled representations of text, without any supervision on semantics. A new mutual information upper bound is derived and leveraged to measure the dependence between style and content. By minimizing this upper bound, the proposed method induces style and content embeddings into two independent low-dimensional spaces. Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representations in terms of content and style preservation.
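To make the mechanism concrete: a well-known sample-based mutual information upper bound in this spirit is the CLUB bound of Cheng et al. (2020), I(s; c) <= E_{p(s,c)}[log q(c|s)] - E_{p(s)p(c)}[log q(c|s)], where q is a variational approximation of p(c|s). The sketch below estimates such a bound between batches of style embeddings s and content embeddings c. It is an illustrative assumption, not the paper's exact estimator: the class name, the Gaussian form of q, the network shapes, and the shuffled-batch negative term are all choices made for this example.

```python
import torch
import torch.nn as nn

class MIUpperBound(nn.Module):
    """Sketch of a CLUB-style mutual information upper bound between style
    embeddings s and content embeddings c. Illustrative assumption only;
    the paper's derived estimator may differ in detail."""

    def __init__(self, style_dim: int, content_dim: int, hidden: int = 256):
        super().__init__()
        # Variational net q(c | s): diagonal Gaussian with learned mean / log-variance.
        self.mu = nn.Sequential(
            nn.Linear(style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, content_dim))
        self.logvar = nn.Sequential(
            nn.Linear(style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, content_dim))

    def log_q(self, s: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # log q(c | s) under a diagonal Gaussian, summed over dimensions
        # (the constant -0.5*log(2*pi) is dropped; it cancels in the bound).
        mu, logvar = self.mu(s), self.logvar(s)
        return (-0.5 * (c - mu).pow(2) / logvar.exp() - 0.5 * logvar).sum(dim=-1)

    def forward(self, s: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # Positive term: matched pairs (s_i, c_i) sampled from p(s, c).
        positive = self.log_q(s, c)
        # Negative term: shuffle c within the batch to approximate p(s)p(c).
        negative = self.log_q(s, c[torch.randperm(c.size(0))])
        # The gap estimates the upper bound on I(s; c).
        return (positive - negative).mean()
```

In a disentangling setup of the kind the abstract describes, training would alternate: fit q by maximizing log_q on matched pairs so the bound stays tight, then update the text encoder to minimize the returned estimate, pushing the style and content embeddings toward independence.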


Related research

05/06/2021 · A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations
Learning disentangled representations of textual data is essential for m...

08/13/2018 · Disentangled Representation Learning for Text Style Transfer
This paper tackles the problem of disentangling the latent variables of ...

12/22/2017 · Disentangled Representations for Manipulation of Sentiment in Text
The ability to change arbitrary aspects of a text while leaving the core...

06/02/2022 · Disentangled Generation Network for Enlarged License Plate Recognition and A Unified Dataset
License plate recognition plays a critical role in many practical applic...

09/15/2021 · Disentangling Generative Factors in Natural Language with Discrete Variational Autoencoders
The ability of learning disentangled representations represents a major ...

05/21/2018 · Invariant Representations from Adversarially Censored Autoencoders
We combine conditional variational autoencoders (VAE) with adversarial c...

04/12/2023 · ALADIN-NST: Self-supervised disentangled representation learning of artistic style through Neural Style Transfer
Representation learning aims to discover individual salient features of ...
