Text Modeling with Syntax-Aware Variational Autoencoders

08/27/2019
by Yijun Xiao, et al.

Syntactic information captures the structures and rules by which sentences are arranged. Incorporating syntax into text modeling can potentially benefit both representation learning and generation. Variational autoencoders (VAEs) are deep generative models that describe observations probabilistically in a latent space; when applied to text, however, the resulting latent representations are often unstructured. We propose syntax-aware variational autoencoders (SAVAEs), which dedicate a subspace of the latent dimensions, dubbed the syntactic latent, to representing the syntactic structure of sentences. SAVAEs are trained to infer the syntactic latent from either the text input or its parse, and to reconstruct the original text from the inferred latent variables. Experiments show that SAVAEs achieve lower reconstruction loss on four data sets and can generate examples with a modified target syntax.
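
As a rough illustration of the idea, the sketch below is not the authors' code: the module names, dimensions, GRU encoder/decoder, and training objective are all assumptions. It partitions a VAE's latent vector into a dedicated syntactic subspace and a remaining subspace, trains both with the usual reconstruction-plus-KL objective, and exposes the two latents separately so the syntactic part can be inferred or swapped on its own.

```python
# Minimal sketch (hypothetical, not the paper's architecture) of a VAE
# whose latent vector is split into a dedicated syntactic subspace and
# a remaining content subspace.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntaxAwareVAE(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256,
                 syn_dim=16, sem_dim=48):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Separate heads: one subspace dedicated to syntax, the rest to content.
        self.syn_head = nn.Linear(hid_dim, 2 * syn_dim)  # mean and log-variance
        self.sem_head = nn.Linear(hid_dim, 2 * sem_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.latent_to_hid = nn.Linear(syn_dim + sem_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    @staticmethod
    def reparameterize(stats):
        # Standard VAE reparameterization plus the analytic Gaussian KL term.
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, h = self.encoder(emb)          # h: (1, batch, hid_dim)
        h = h.squeeze(0)
        z_syn, kl_syn = self.reparameterize(self.syn_head(h))
        z_sem, kl_sem = self.reparameterize(self.sem_head(h))
        z = torch.cat([z_syn, z_sem], dim=-1)
        # Teacher-forced reconstruction conditioned on the full latent.
        h0 = torch.tanh(self.latent_to_hid(z)).unsqueeze(0)
        dec_out, _ = self.decoder(emb, h0)
        logits = self.out(dec_out)
        recon = F.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            tokens[:, 1:].reshape(-1), reduction="mean")
        return recon + (kl_syn + kl_sem).mean(), z_syn, z_sem
```

Under this setup, holding z_sem fixed at generation time while replacing z_syn with a value inferred from a different sentence or parse would correspond to the syntax-modified generation the abstract describes; the paper's actual mechanism for inferring the syntactic latent from parse results is not reproduced here.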

Related research:

06/05/2019 · Syntax-Infused Variational Autoencoder for Text Generation
We present a syntax-infused variational autoencoder (SIVAE), that integr...

07/06/2019 · Generating Sentences from Disentangled Syntactic and Semantic Spaces
Variational auto-encoders (VAEs) are widely used in natural language gen...

05/29/2019 · Latent Space Secrets of Denoising Text-Autoencoders
While neural language models have recently demonstrated impressive perfo...

05/04/2023 · Interpretable Sentence Representation with Variational Autoencoders and Attention
In this thesis, we develop methods to enhance the interpretability of re...

02/24/2018 · Syntax-Directed Variational Autoencoder for Structured Data
Deep generative models have been enjoying success in modeling continuous...

02/09/2023 · Trading Information between Latents in Hierarchical Variational Autoencoders
Variational Autoencoders (VAEs) were originally motivated (Kingma We...

05/12/2022 · Exploiting Inductive Bias in Transformers for Unsupervised Disentanglement of Syntax and Semantics with VAEs
We propose a generative model for text generation, which exhibits disent...
