Memory-Augmented Recurrent Networks for Dialogue Coherence

by David Donahue et al.

Recent dialogue approaches operate by reading each word in a conversation history and aggregating the accrued dialogue information into a single state. This fixed-size vector is not expandable and must maintain a consistent format over time. Other recent approaches exploit an attention mechanism to extract useful information from past conversational utterances, but this increases computational complexity. In this work, we explore the use of the Neural Turing Machine (NTM) to provide a more permanent and flexible storage mechanism for maintaining dialogue coherence. Specifically, we introduce two separate dialogue architectures based on this NTM design. The first features a sequence-to-sequence architecture with two separate NTM modules, one for each participant in the conversation. The second incorporates a single NTM module, which stores parallel context information for both speakers; it also replaces the sequence-to-sequence architecture with a neural language model, allowing the NTM a longer context and a greater understanding of the dialogue history. We report perplexity performance for both models and compare them to existing baselines.
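The abstract's core mechanism, an NTM-style external memory that dialogue models read from and write to, can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it is a simplified single-head, content-addressed memory in the spirit of the NTM (cosine-similarity addressing followed by an erase-then-add write), with slot counts, dimensions, and the `beta` sharpening parameter chosen arbitrarily for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

class NTMMemory:
    """Minimal external memory with content-based addressing.
    Single read/write head; shapes are illustrative assumptions."""

    def __init__(self, n_slots=8, slot_dim=4):
        # Memory matrix: one row per slot.
        self.M = np.zeros((n_slots, slot_dim))

    def _address(self, key, beta=5.0):
        # Cosine similarity between the key and each slot,
        # sharpened by beta and normalized into attention weights.
        norms = np.linalg.norm(self.M, axis=1) * np.linalg.norm(key) + 1e-8
        sim = (self.M @ key) / norms
        return softmax(beta * sim)

    def read(self, key):
        # Read vector is a convex combination of memory slots.
        w = self._address(key)
        return w @ self.M

    def write(self, key, erase, add):
        # NTM-style update: erase a fraction of each addressed slot,
        # then add new content, weighted by the addressing vector.
        w = self._address(key)
        self.M = self.M * (1 - np.outer(w, erase)) + np.outer(w, add)
```

In a dialogue setting, a controller RNN would emit the `key`, `erase`, and `add` vectors at each utterance, so speaker context persists in `M` rather than being squeezed into a single fixed-size hidden state.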



