A Hierarchical Latent Structure for Variational Conversation Modeling

04/10/2018
by Yookoon Park et al.

Variational autoencoders (VAEs) combined with hierarchical RNNs have emerged as a powerful framework for conversation modeling. However, they suffer from the notorious degeneration problem, where the decoders learn to ignore latent variables and reduce to vanilla RNNs. We empirically show that this degeneracy occurs mostly for two reasons. First, the expressive power of hierarchical RNN decoders is often high enough to model the data using only their decoding distributions, without relying on the latent variables. Second, the conditional VAE structure, whose generation process is conditioned on a context, makes the range of training targets very sparse; that is, the RNN decoders can easily overfit to the training data while ignoring the latent variables. To solve the degeneration problem, we propose a novel model named Variational Hierarchical Conversation RNNs (VHCR), which involves two key ideas: (1) using a hierarchical structure of latent variables, and (2) exploiting an utterance drop regularization. With evaluations on two datasets, Cornell Movie Dialog and Ubuntu Dialog Corpus, we show that VHCR successfully utilizes latent variables and outperforms state-of-the-art models for conversation generation. Moreover, it can perform several new utterance control tasks, thanks to its hierarchical latent structure.
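To make the two ideas concrete, below is a minimal PyTorch sketch of the hierarchical latent structure with utterance drop. The class name, layer sizes, drop probability, and the mean-pooled conversation encoder are assumptions made for brevity, not the authors' released implementation: a conversation-level latent z_conv is inferred from the whole dialog, per-utterance latents z_utt are conditioned on z_conv and the running context, and utterance drop randomly replaces utterance encodings with a shared generic vector during training.

```python
import torch
import torch.nn as nn

class VHCRSketch(nn.Module):
    """Toy sketch of VHCR's hierarchical latents plus utterance drop.
    Sizes, names, and the mean-pooled conversation encoder are
    simplifying assumptions, not the paper's exact architecture."""

    def __init__(self, hidden=256, latent=64, p_drop=0.25):
        super().__init__()
        self.p_drop = p_drop
        # learned generic vector that stands in for dropped utterances
        self.unk = nn.Parameter(torch.zeros(hidden))
        # context RNN sees the utterance encoding and z_conv at every step
        self.ctx_rnn = nn.GRU(hidden + latent, hidden, batch_first=True)
        self.q_conv = nn.Linear(hidden, 2 * latent)              # q(z_conv | x_1..T)
        self.q_utt = nn.Linear(2 * hidden + latent, 2 * latent)  # q(z_utt_t | c_{t-1}, z_conv, x_t)

    @staticmethod
    def sample(stats):
        # reparameterization trick: stats holds (mu, log-variance)
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, utt_enc):
        # utt_enc: (B, T, hidden) encodings of the T utterances in each dialog
        B, T, H = utt_enc.shape
        if self.training:
            # utterance drop: hide each encoding with prob p_drop so the
            # deterministic context path cannot explain the data alone
            drop = (torch.rand(B, T, 1, device=utt_enc.device) < self.p_drop).float()
            utt_enc = (1 - drop) * utt_enc + drop * self.unk
        # conversation-level latent (mean pooling stands in for a bi-RNN encoder)
        z_conv = self.sample(self.q_conv(utt_enc.mean(dim=1)))   # (B, latent)
        z_rep = z_conv.unsqueeze(1).expand(B, T, -1)
        ctx, _ = self.ctx_rnn(torch.cat([utt_enc, z_rep], dim=-1))  # (B, T, hidden)
        prev_ctx = torch.cat([ctx.new_zeros(B, 1, H), ctx[:, :-1]], dim=1)
        # utterance-level latents, each conditioned on z_conv and prior context
        z_utt = self.sample(self.q_utt(torch.cat([prev_ctx, z_rep, utt_enc], dim=-1)))
        return z_conv, z_utt  # a word-level decoder would condition on (z_conv, z_utt_t, ctx_t)

model = VHCRSketch()
z_conv, z_utt = model(torch.randn(2, 5, 256))  # 2 dialogs, 5 utterances each
```

Because the dropped positions starve the deterministic context path of information, the model is pushed to route that information through z_conv and z_utt instead, which is what counters the degeneration described above.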


Related research

05/31/2018 · DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
Variational autoencoders (VAEs) have shown a promise in data-driven conv...

05/24/2023 · Dior-CVAE: Diffusion Priors in Variational Dialog Generation
Conditional variational autoencoders (CVAEs) have been used recently for...

05/26/2023 · NormMark: A Weakly Supervised Markov Model for Socio-cultural Norm Discovery
Norms, which are culturally accepted guidelines for behaviours, can be i...

05/24/2019 · mu-Forcing: Training Variational Recurrent Autoencoders for Text Generation
It has been previously observed that training Variational Recurrent Auto...

04/30/2020 · APo-VAE: Text Generation in Hyperbolic Space
Natural language often exhibits inherent hierarchical structure ingraine...

11/14/2018 · Extractive Summary as Discrete Latent Variables
In this paper, we compare various methods to compress a text using a neu...

03/07/2022 · Hierarchical Sketch Induction for Paraphrase Generation
We propose a generative model of paraphrase generation, that encourages ...
