
Generate, Evaluate, and Select: A Dialogue System with a Response Evaluator for Diversity-Aware Response Generation

by Ryoma Sakaeda, et al.

We aim to overcome the lack of diversity in the responses of current dialogue systems and to develop a dialogue system that is engaging as a conversational partner. We propose a generator-evaluator model in which a response generator produces multiple candidate responses and an evaluator scores them and selects the best one. Generating multiple candidates yields diverse responses. We conduct human evaluations comparing the output of the proposed system with that of a baseline system. The results show that the proposed system's responses were often judged better than the baseline's, indicating the effectiveness of the proposed method.
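The generate-evaluate-select pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate_candidates` and `evaluate` are hypothetical stand-ins for the neural generator (which would produce diverse candidates via, e.g., sampling) and the learned response evaluator.

```python
import random

def generate_candidates(context, n=5, rng=None):
    # Hypothetical stand-in for a sampling-based neural generator.
    # In the paper's setting, diversity would come from sampling a
    # trained response-generation model several times.
    rng = rng or random.Random(0)
    templates = [
        "That sounds interesting, tell me more.",
        "I see. Why do you think that is?",
        "Really? I had no idea.",
        "I love talking about that topic!",
        "Hmm, I'm not sure I agree.",
    ]
    return rng.sample(templates, k=min(n, len(templates)))

def evaluate(context, response):
    # Hypothetical stand-in for the learned evaluator, which scores a
    # (context, response) pair. Here: a toy heuristic favoring longer
    # responses that ask a follow-up question.
    return len(response.split()) + (2 if "?" in response else 0)

def select_response(context, n=5):
    # Generate multiple candidates, score each, return the best.
    candidates = generate_candidates(context, n=n)
    return max(candidates, key=lambda r: evaluate(context, r))
```

The key design point is the separation of concerns: the generator is responsible only for producing a diverse candidate pool, while the evaluator is responsible for ranking quality, so either component can be improved independently.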
