
Disentangled Representation Learning for Text Style Transfer

by Vineet John et al.
University of Waterloo

This paper tackles the problem of disentangling the latent variables of style and content in language models. We propose a simple yet effective approach that incorporates auxiliary objectives: a multi-task classification objective, and dual adversarial objectives for label prediction and bag-of-words prediction, respectively. We show, both qualitatively and quantitatively, that this approach does disentangle style and content in the latent space. We apply this disentangled representation learning method to attribute (e.g., style) transfer on non-parallel corpora, achieving content preservation scores comparable to previous state-of-the-art approaches and significantly better style-transfer strength scores. Our code is publicly available for replication and extension.
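The auxiliary objectives described above can be sketched as follows. This is a minimal, illustrative PyTorch sketch (not the authors' released code): a recurrent encoder whose hidden state is split into a style part and a content part, a multi-task classifier that keeps style information in the style space, an adversarial label classifier that tries to recover the style label from the content space, and an adversarial bag-of-words predictor that tries to recover content words from the style space. All layer sizes and names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Illustrative sketch: a GRU encoder whose final hidden state is
    split into a style slice and a content slice."""

    def __init__(self, vocab_size, emb_dim=64, style_dim=8, content_dim=56):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, style_dim + content_dim, batch_first=True)
        self.style_dim = style_dim
        # Multi-task objective: predict the style label FROM the style space.
        self.style_clf = nn.Linear(style_dim, 2)
        # Adversary 1: predict the style label FROM the content space.
        self.style_adv = nn.Linear(content_dim, 2)
        # Adversary 2: predict the bag-of-words FROM the style space.
        self.bow_adv = nn.Linear(style_dim, vocab_size)

    def forward(self, tokens):
        _, h = self.gru(self.embed(tokens))   # h: (1, batch, style+content)
        h = h.squeeze(0)
        return h[:, :self.style_dim], h[:, self.style_dim:]


def auxiliary_losses(model, tokens, labels, vocab_size):
    """Compute the three auxiliary losses. During encoder updates the
    adversaries' gradients would be reversed or blocked (e.g. via
    .detach() or a gradient-reversal layer); only the detached variant
    for the adversaries' own training step is shown here."""
    ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
    style, content = model(tokens)
    # Multi-hot bag-of-words target built from the input token ids.
    bow_target = torch.zeros(tokens.size(0), vocab_size)
    bow_target.scatter_(1, tokens, 1.0)
    mtl_loss = ce(model.style_clf(style), labels)
    adv_label_loss = ce(model.style_adv(content.detach()), labels)
    adv_bow_loss = bce(model.bow_adv(style.detach()), bow_target)
    return mtl_loss, adv_label_loss, adv_bow_loss
```

In this sketch, disentanglement emerges because the encoder is rewarded for making the style slice predictive of the label while making the content slice uninformative to the label adversary, and vice versa for the bag-of-words adversary.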
