Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation

06/02/2016
by Iulian Vlad Serban, et al.

We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens. There are many ways to estimate or learn the high-level coarse tokens, but we argue that a simple extraction procedure is sufficient to capture a wealth of high-level discourse semantics. Such a procedure allows training the multiresolution recurrent neural network by maximizing the exact joint log-likelihood over both sequences. In contrast to the standard log-likelihood objective w.r.t. natural language tokens (word perplexity), optimizing the joint log-likelihood biases the model towards modeling high-level abstractions. We apply the proposed model to the task of dialogue response generation in two challenging domains: the Ubuntu technical support domain, and Twitter conversations. On Ubuntu, the model outperforms competing approaches by a substantial margin, achieving state-of-the-art results according to both automatic evaluation metrics and a human evaluation study. On Twitter, the model appears to generate more relevant and on-topic responses according to automatic evaluation metrics. Finally, our experiments demonstrate that the proposed model is more adept at overcoming the sparsity of natural language and is better able to capture long-term structure.
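To make the two ideas in the abstract concrete, the sketch below illustrates (1) a simple keyword-style extraction that projects a natural language utterance onto a coarse token sequence, and (2) a joint log-likelihood that sums the log-probabilities of the coarse sequence and the natural language sequence. This is only a minimal illustration, not the authors' implementation: the coarse vocabulary, the example utterance, and the smoothed unigram models standing in for the RNN decoders are all illustrative assumptions.

```python
# Minimal sketch of coarse-token extraction and a joint log-likelihood objective.
# The coarse vocabulary, the toy utterance, and the unigram stand-ins for the
# RNN decoders are assumptions made for illustration only.
import math
from collections import Counter

# Hypothetical coarse vocabulary, e.g. technical entities in the Ubuntu domain.
COARSE_VOCAB = {"nvidia", "driver", "apt-get", "kernel", "wifi"}

def extract_coarse_tokens(utterance):
    """Keep only tokens from the coarse vocabulary (a simplified extraction)."""
    return [tok for tok in utterance.lower().split() if tok in COARSE_VOCAB]

def unigram_log_likelihood(tokens, counts, total, vocab_size):
    """Add-one smoothed unigram log-likelihood, standing in for an RNN decoder."""
    return sum(math.log((counts[t] + 1) / (total + vocab_size)) for t in tokens)

# Toy "training" response and its coarse projection.
response = "my nvidia driver breaks after every kernel update".split()
coarse = extract_coarse_tokens(" ".join(response))  # ['nvidia', 'driver', 'kernel']

word_counts, coarse_counts = Counter(response), Counter(coarse)

# Joint objective: log p(coarse sequence) + log p(natural language sequence),
# which training would maximize over both sequences.
joint_ll = (
    unigram_log_likelihood(coarse, coarse_counts, len(coarse), len(COARSE_VOCAB))
    + unigram_log_likelihood(response, word_counts, len(response), 10000)
)
print(f"coarse tokens: {coarse}")
print(f"joint log-likelihood (toy): {joint_ll:.2f}")
```

In the proposed model both terms come from recurrent decoders rather than unigram counts, but the shape of the objective is the same: the coarse sequence contributes its own log-likelihood term, which biases learning towards the high-level abstractions the coarse tokens encode.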


Related research

11/18/2016 - Generative Deep Neural Networks for Dialogue: A Short Review
Researchers have recently started investigating deep neural networks for...

03/25/2016 - How NOT To Evaluate Your Dialogue System: An Empirical Study of Unsupervised Evaluation Metrics for Dialogue Response Generation
We investigate evaluation metrics for dialogue response generation syste...

06/06/2023 - Injecting knowledge into language generation: a case study in auto-charting after-visit care instructions from medical dialogue
Factual correctness is often the limiting factor in practical applicatio...

09/27/2018 - NEXUS Network: Connecting the Preceding and the Following in Dialogue Generation
Sequence-to-Sequence (seq2seq) models have become overwhelmingly popular...

08/07/2015 - Stochastic Language Generation in Dialogue using Recurrent Neural Networks with Convolutional Sentence Reranking
The natural language generation (NLG) component of a spoken dialogue sys...

01/12/2020 - Stochastic Natural Language Generation Using Dependency Information
This article presents a stochastic corpus-based model for generating nat...

04/07/2016 - Neural Headline Generation with Sentence-wise Optimization
Recently, neural models have been proposed for headline generation by le...
