A Hybrid Convolutional Variational Autoencoder for Text Generation

02/08/2017
by Stanislau Semeniuta, et al.

In this paper we explore the effect of architectural choices on learning a Variational Autoencoder (VAE) for text generation. In contrast to the previously introduced VAE model for text, where both the encoder and the decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model. Our architecture exhibits several attractive properties, such as faster run time and convergence and the ability to better handle long sequences; most importantly, it helps to avoid some of the major difficulties posed by training VAE models on textual data.
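To make the overall shape of such a model concrete, here is a minimal PyTorch sketch of a hybrid convolutional VAE for text: a feed-forward convolutional encoder maps token embeddings to the latent Gaussian parameters, a deconvolutional decoder expands the sampled latent code back into per-timestep features, and a recurrent language model consumes those features alongside the (teacher-forced) token embeddings. All layer sizes, kernel widths, and the exact conditioning scheme are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a hybrid convolutional VAE for text (assumes PyTorch).
# Hyperparameters and the conditioning scheme are illustrative assumptions.
import torch
import torch.nn as nn

class HybridConvVAE(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, latent_dim=32, seq_len=64):
        super().__init__()
        self.seq_len = seq_len
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Feed-forward convolutional encoder: embeddings -> latent parameters.
        self.encoder = nn.Sequential(
            nn.Conv1d(emb_dim, 256, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, 512, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        enc_out = 512 * (seq_len // 4)  # two stride-2 convs: T -> T/4
        self.to_mu = nn.Linear(enc_out, latent_dim)
        self.to_logvar = nn.Linear(enc_out, latent_dim)
        # Deconvolutional decoder: latent code -> per-timestep features.
        self.from_z = nn.Linear(latent_dim, enc_out)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(512, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(256, emb_dim, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )
        # Recurrent language model conditioned on the deconvolved features.
        self.rnn = nn.LSTM(emb_dim * 2, 256, batch_first=True)
        self.out = nn.Linear(256, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)                    # (B, T, E)
        h = self.encoder(x.transpose(1, 2))       # (B, 512, T/4)
        h = h.flatten(1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        feats = self.from_z(z).view(-1, 512, self.seq_len // 4)
        feats = self.decoder(feats).transpose(1, 2)   # (B, T, E)
        # The RNN sees the teacher-forced embeddings concatenated with the
        # deconvolutional features at each timestep.
        rnn_in = torch.cat([x, feats], dim=-1)
        out, _ = self.rnn(rnn_in)
        return self.out(out), mu, logvar

if __name__ == "__main__":
    model = HybridConvVAE()
    tokens = torch.randint(0, 10000, (8, 64))     # batch of token ids
    logits, mu, logvar = model(tokens)
    print(logits.shape)                           # (8, 64, 10000)
```

Training such a model would minimize the usual negative ELBO, i.e. token-level cross-entropy on the logits plus the KL divergence between the approximate posterior and the prior; text VAEs commonly anneal the KL weight during training to keep the decoder from ignoring the latent code.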

Related research

11/02/2020 · Improving Variational Autoencoder for Text Modelling with Timestep-Wise Regularisation
The Variational Autoencoder (VAE) is a popular and powerful model applie...

03/17/2019 · Topic-Guided Variational Autoencoders for Text Generation
We propose a topic-guided variational autoencoder (TGVAE) model for text...

09/06/2017 · Symmetric Variational Autoencoder and Connections to Adversarial Learning
A new form of the variational autoencoder (VAE) is proposed, based on th...

01/01/2023 · eVAE: Evolutionary Variational Autoencoder
The surrogate loss of variational autoencoders (VAEs) poses various chal...

12/07/2020 · Autoencoding Variational Autoencoder
Does a Variational AutoEncoder (VAE) consistently encode typical samples...

02/19/2018 · Degeneration in VAE: in the Light of Fisher Information Loss
Variational Autoencoder (VAE) is one of the most popular generative mode...

06/15/2020 · Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder
Generating inferential texts about an event in different perspectives re...
