
A Hierarchical Latent Structure for Variational Conversation Modeling

04/10/2018
by Yookoon Park, et al.
Seoul National University

Variational autoencoders (VAEs) combined with hierarchical RNNs have emerged as a powerful framework for conversation modeling. However, they suffer from the notorious degeneration problem, in which the decoders learn to ignore the latent variables and reduce to vanilla RNNs. We empirically show that this degeneracy arises for two main reasons. First, hierarchical RNN decoders are often expressive enough to model the data using their decoding distributions alone, without relying on the latent variables. Second, the conditional VAE structure, whose generation process is conditioned on a context, makes the range of training targets very sparse; that is, the RNN decoders can easily overfit to the training data while ignoring the latent variables. To solve the degeneration problem, we propose a novel model named Variational Hierarchical Conversation RNNs (VHCR), built on two key ideas: (1) a hierarchical structure of latent variables, and (2) an utterance drop regularization. Evaluations on two datasets, the Cornell Movie Dialog and Ubuntu Dialog corpora, show that VHCR successfully utilizes its latent variables and outperforms state-of-the-art models for conversation generation. Moreover, thanks to its hierarchical latent structure, it can perform several new utterance control tasks.
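To make the two ideas concrete, here is a minimal PyTorch sketch of a VHCR-style encoder path (PyTorch being the language of the paper's released code). It is an illustrative sketch under stated assumptions, not the authors' implementation: the class name VHCRSketch, all dimensions, and the drop probability utt_drop_p are invented for illustration. It pairs a conversation-level latent z_conv with per-utterance latents z_utt, and implements utterance drop by randomly zeroing utterance encodings during training, so the context RNN cannot model the conversation from the encodings alone and must rely on z_conv.

import torch
import torch.nn as nn

class VHCRSketch(nn.Module):
    # Illustrative sketch of VHCR's hierarchical latents + utterance drop;
    # names and sizes are assumptions, not the authors' code.
    def __init__(self, utt_dim=256, ctx_dim=256, z_conv_dim=64, z_utt_dim=64,
                 utt_drop_p=0.25):
        super().__init__()
        self.utt_drop_p = utt_drop_p
        # Context RNN runs over utterance encodings, conditioned on z_conv.
        self.context_rnn = nn.GRU(utt_dim + z_conv_dim, ctx_dim, batch_first=True)
        # Posterior networks producing (mu, logvar) for each latent level.
        self.q_conv = nn.Linear(utt_dim, 2 * z_conv_dim)
        self.q_utt = nn.Linear(ctx_dim + utt_dim, 2 * z_utt_dim)

    @staticmethod
    def reparameterize(stats):
        # Standard VAE reparameterization: z = mu + sigma * eps.
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar

    def forward(self, utt_encodings):
        # utt_encodings: (batch, n_utts, utt_dim), e.g. final states of an
        # utterance encoder RNN (omitted here).
        b, n, _ = utt_encodings.shape
        # (1) Conversation-level latent from a whole-conversation summary;
        # mu/logvar would feed the KL terms of the ELBO (omitted here).
        z_conv, mu_c, logvar_c = self.reparameterize(
            self.q_conv(utt_encodings.mean(dim=1)))
        # (2) Utterance drop: during training, randomly zero whole utterance
        # encodings so the context RNN must fall back on z_conv.
        if self.training:
            keep = (torch.rand(b, n, 1, device=utt_encodings.device)
                    > self.utt_drop_p).float()
            utt_encodings = utt_encodings * keep
        # The context RNN sees z_conv at every step.
        ctx_in = torch.cat(
            [utt_encodings, z_conv.unsqueeze(1).expand(b, n, -1)], dim=-1)
        ctx, _ = self.context_rnn(ctx_in)
        # (3) Per-utterance latents, conditioned on context and encoding.
        z_utt, mu_u, logvar_u = self.reparameterize(
            self.q_utt(torch.cat([ctx, utt_encodings], dim=-1)))
        return z_conv, z_utt

model = VHCRSketch()
enc = torch.randn(8, 5, 256)   # 8 conversations, 5 utterance encodings each
z_conv, z_utt = model(enc)     # z_conv: (8, 64); z_utt: (8, 5, 64)

In the full model, a decoder RNN would condition each utterance on the context state together with z_conv and z_utt; the hierarchical split is what enables the conversation-level control tasks mentioned in the abstract, such as varying z_conv while the rest is held fixed.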


Related Research

05/31/2018 · DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
Variational autoencoders (VAEs) have shown a promise in data-driven conv...

09/17/2020 · Hierarchical Multi-Grained Generative Model for Expressive Speech Synthesis
This paper proposes a hierarchical generative model with a multi-grained...

07/14/2020 · Relaxed-Responsibility Hierarchical Discrete VAEs
Successfully training Variational Autoencoders (VAEs) with a hierarchy o...

05/24/2019 · mu-Forcing: Training Variational Recurrent Autoencoders for Text Generation
It has been previously observed that training Variational Recurrent Auto...

11/14/2018 · Extractive Summary as Discrete Latent Variables
In this paper, we compare various methods to compress a text using a neu...

03/07/2022 · Hierarchical Sketch Induction for Paraphrase Generation
We propose a generative model of paraphrase generation, that encourages ...

05/12/2020 · Jigsaw-VAE: Towards Balancing Features in Variational Autoencoders
The latent variables learned by VAEs have seen considerable interest as ...

Code Repositories

A-Hierarchical-Latent-Structure-for-Variational-Conversation-Modeling

A PyTorch 0.4 implementation of "A Hierarchical Latent Structure for Variational Conversation Modeling", accepted at NAACL 2018 (Oral).

