Polarized-VAE: Proximity Based Disentangled Representation Learning for Text Generation

04/22/2020
by Vikash Balasubramanian, et al.

Learning disentangled representations of real-world data is a challenging open problem. Most previous methods have focused either on fully supervised approaches that use attribute labels, or on unsupervised approaches that manipulate the factorization in the latent space of models such as the variational autoencoder (VAE) by training with task-specific losses. In this work, we propose polarized-VAE, a novel approach that disentangles selected attributes in the latent space based on proximity measures reflecting the similarity between data points with respect to these attributes. We apply our method to disentangle the semantics and syntax of a sentence and carry out transfer experiments. Polarized-VAE significantly outperforms the VAE baseline and is competitive with state-of-the-art approaches, while being a more general framework that is applicable to other attribute disentanglement tasks.
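The abstract does not give the exact training objective, but the idea of a proximity measure that "polarizes" a latent sub-space can be illustrated with a contrastive-style penalty: latent codes of sentences that are similar with respect to an attribute are pulled together, while dissimilar ones are pushed at least a margin apart. The following is a minimal numpy sketch under that assumption — the function name, the binary similarity matrix, and the margin hinge are illustrative choices, not the paper's actual loss:

```python
import numpy as np

def proximity_loss(z, sim, margin=1.0):
    """Contrastive-style proximity penalty on one latent sub-space.

    z      : (n, d) latent codes for a single attribute (e.g. syntax).
    sim    : (n, n) binary matrix; sim[i, j] = 1 if items i and j are
             similar with respect to the attribute, else 0.
    margin : minimum distance dissimilar pairs should keep.

    Similar pairs are penalized by their squared distance (attraction);
    dissimilar pairs are penalized only when closer than the margin
    (repulsion via a hinge). Self-pairs are masked out.
    """
    diff = z[:, None, :] - z[None, :, :]           # (n, n, d) pairwise differences
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)    # (n, n) Euclidean distances
    pull = sim * dist ** 2                         # attract similar pairs
    push = (1 - sim) * np.maximum(0.0, margin - dist) ** 2  # repel dissimilar pairs
    mask = 1.0 - np.eye(len(z))                    # ignore i == j
    return ((pull + push) * mask).sum() / mask.sum()
```

In a full model, a term like this would be added to the usual VAE reconstruction and KL objectives, with one such penalty per attribute sub-space; well-separated clusters of similar items yield a lower penalty than a mixed arrangement.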


