Retrospective and Prospective Mixture-of-Generators for Task-oriented Dialogue Response Generation

11/19/2019
by Jiahuan Pei, et al.

Dialogue response generation (DRG) is a critical component of task-oriented dialogue systems (TDSs). Its purpose is to generate proper natural language responses given a context, e.g., historical utterances and system states. State-of-the-art work focuses on tackling DRG in an end-to-end way. Typically, such studies assume that each token is drawn from a single distribution over the output vocabulary, which may not always be optimal: responses vary greatly across intents, e.g., domains and system actions. We propose a novel mixture-of-generators network (MoGNet) for DRG, in which each token of a response is drawn from a mixture of distributions. MoGNet consists of a chair generator and several expert generators. Each expert is specialized in DRG w.r.t. a particular intent; the chair coordinates the experts and combines their outputs to produce more appropriate responses. We propose two strategies to help the chair make better decisions: a retrospective mixture-of-generators (RMoG) and a prospective mixture-of-generators (PMoG). The former only considers the expert-generated responses up to the current time step, while the latter also considers possible future expert-generated responses by encouraging exploration. To differentiate the experts, we also devise a global-and-local (GL) learning scheme that forces each expert to specialize in a particular intent using a local loss and trains the chair and all experts to coordinate using a global loss. We carry out extensive experiments on the MultiWOZ benchmark dataset. MoGNet significantly outperforms state-of-the-art methods in both automatic and human evaluations, demonstrating its effectiveness for DRG.
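The core idea above can be illustrated with a minimal sketch: the per-token output distribution is a chair-weighted mixture of expert distributions, p(y_t) = Σ_k α_k · p_k(y_t), and the GL objective combines a global cross-entropy on the mixture with a local cross-entropy on the expert matching the intent. This is not the paper's implementation; the function names (`chair_mixture`, `gl_loss`), the `lam` interpolation weight, and the choice of a single "oracle" expert per example are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def chair_mixture(expert_logits, chair_scores):
    """Combine K expert token distributions into one:
    p(y_t) = sum_k alpha_k * p_k(y_t), alpha = softmax(chair_scores).

    expert_logits: (K, V) array of per-expert vocabulary logits.
    chair_scores:  (K,) array of chair coordination scores.
    Returns the (V,) mixture distribution, the (K, V) expert
    distributions, and the (K,) mixture weights alpha.
    """
    expert_probs = softmax(np.asarray(expert_logits, dtype=float))  # (K, V)
    alpha = softmax(np.asarray(chair_scores, dtype=float))          # (K,)
    return alpha @ expert_probs, expert_probs, alpha

def gl_loss(mixture_prob, expert_probs, target_id, oracle_expert, lam=0.5):
    """Global-and-local objective sketch: the global term is the
    cross-entropy of the mixture on the target token; the local term
    pushes the intent-matched (oracle) expert toward the same target,
    forcing experts to specialize.  `lam` (assumed here) trades the
    two terms off."""
    global_loss = -np.log(mixture_prob[target_id])
    local_loss = -np.log(expert_probs[oracle_expert][target_id])
    return lam * global_loss + (1.0 - lam) * local_loss
```

At decoding time only `chair_mixture` is needed: the next token is sampled (or argmax-selected) from the returned mixture, so experts that the chair down-weights for the current intent contribute little to the output.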


Related research

07/10/2019  A Modular Task-oriented Dialogue System Using a Neural Mixture-of-Experts
End-to-end Task-oriented Dialogue Systems (TDSs) have attracted a lot of...

12/29/2020  Interpretable NLG for Task-oriented Dialogue Systems with Heterogeneous Rendering Machines
End-to-end neural networks have achieved promising performances in natur...

08/07/2020  Diversifying Task-oriented Dialogue Response Generation with Prototype Guided Paraphrasing
Existing methods for Dialogue Response Generation (DRG) in Task-oriented...

04/05/2019  Generate, Filter, and Rank: Grammaticality Classification for Production-Ready NLG Systems
Neural approaches to Natural Language Generation (NLG) have been promisi...

08/21/2019  MoEL: Mixture of Empathetic Listeners
Previous research on empathetic dialogue systems has mostly focused on g...

05/31/2022  A Mixture-of-Expert Approach to RL-based Dialogue Management
Despite recent advancements in language models (LMs), their application ...

11/20/2018  Another Diversity-Promoting Objective Function for Neural Dialogue Generation
Although generation-based dialogue systems have been widely researched, ...
