Improving Variational Encoder-Decoders in Dialogue Generation

02/06/2018
by Xiaoyu Shen, et al.

Variational encoder-decoders (VEDs) have shown promising results in dialogue generation. However, the latent variable distributions are usually approximated by a much simpler model than the powerful RNN structure used for encoding and decoding, yielding the KL-vanishing problem and an inconsistent training objective. In this paper, we separate training into two phases: the first phase learns to autoencode discrete texts into continuous embeddings, from which the second phase learns to generalize latent representations by reconstructing the encoded embedding. In this setup, latent variables are sampled by transforming Gaussian noise through multi-layer perceptrons and are trained with a separate VED model, which has the potential to realize a much more flexible distribution. We compare our model with current popular models, and the experiments demonstrate substantial improvement in both metric-based and human evaluations.
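The core sampling idea in the abstract, pushing Gaussian noise through a multi-layer perceptron so that the resulting latent distribution can be far more flexible than a diagonal Gaussian, can be illustrated with a minimal NumPy sketch. All function names, layer sizes, and initialization choices below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def init_mlp(sizes, rng):
    # Illustrative random initialization of a small multi-layer perceptron.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def sample_latent(params, n_samples, noise_dim, rng):
    # Draw standard Gaussian noise and transform it through the MLP.
    # The pushforward distribution of the output z need not be Gaussian,
    # which is the flexibility the abstract alludes to.
    h = rng.standard_normal((n_samples, noise_dim))
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.tanh(h)  # nonlinearity on hidden layers only
    return h

rng = np.random.default_rng(0)
noise_dim, latent_dim = 8, 16
params = init_mlp([noise_dim, 32, latent_dim], rng)
z = sample_latent(params, n_samples=4, noise_dim=noise_dim, rng=rng)
print(z.shape)  # (4, 16)
```

In the paper's two-phase scheme, such an MLP sampler would be trained in the second phase to reconstruct the continuous embeddings produced by the first-phase autoencoder, rather than the raw text.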


Related research

- Towards Diverse, Relevant and Coherent Open-Domain Dialogue Generation via Hybrid Latent Variables (12/02/2022)
- Fuse It More Deeply! A Variational Transformer with Layer-Wise Latent Variable Inference for Text Generation (07/13/2022)
- DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder (05/31/2018)
- A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues (05/19/2016)
- Piecewise Latent Variables for Neural Variational Text Processing (12/01/2016)
- Paraphrase Generation with Latent Bag of Words (01/07/2020)
- Modeling Complex Dialogue Mappings via Sentence Semantic Segmentation Guided Conditional Variational Auto-Encoder (12/01/2022)
