On the Evaluation of Dialogue Systems with Next Utterance Classification

05/18/2016
by Ryan Lowe, et al.

An open challenge in constructing dialogue systems is developing methods for automatically learning dialogue strategies from large amounts of unlabelled data. Recent work has proposed Next-Utterance-Classification (NUC) as a surrogate task for building dialogue systems from text data. In this paper we investigate human performance on this task to validate the relevance of NUC as a method of evaluation. Our results show three main findings: (1) humans are able to correctly classify responses at a rate well above chance, confirming that the task is feasible; (2) human performance varies across task domains (we consider three datasets) and expertise levels (novices vs. experts), showing that a range of performance is possible on this type of task; (3) automated dialogue systems built with state-of-the-art machine learning methods perform similarly to the human novices but worse than the experts, confirming the utility of this class of tasks for driving further research in automated dialogue systems.
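NUC frames response selection as a ranking problem: given a dialogue context and a small candidate pool containing the true next utterance plus sampled distractors, the system (or human judge) must pick the correct one, commonly scored as Recall@k. The sketch below illustrates that evaluation loop; the `score` function, the example format, and the toy overlap scorer are illustrative assumptions, not taken from the paper.

```python
import random
from typing import Callable, List, Sequence, Tuple

def nuc_recall_at_k(
    score: Callable[[str, str], float],
    examples: Sequence[Tuple[str, str, List[str]]],  # (context, true response, distractors)
    k: int = 1,
) -> float:
    """Fraction of examples whose true response is ranked in the top k
    of the candidate pool (true response + distractors)."""
    hits = 0
    for context, true_response, distractors in examples:
        candidates = [true_response] + list(distractors)
        random.shuffle(candidates)  # break ties without positional bias
        ranked = sorted(candidates, key=lambda r: score(context, r), reverse=True)
        if true_response in ranked[:k]:
            hits += 1
    return hits / len(examples)

# Toy scorer (hypothetical stand-in for a trained model): lexical overlap
# between the context and a candidate response.
def overlap_score(context: str, response: str) -> float:
    ctx, resp = set(context.lower().split()), set(response.lower().split())
    return len(ctx & resp) / (len(resp) or 1)

examples = [
    ("how do I mount a usb drive ?",
     "plug it in and run sudo mount /dev/sdb1 /mnt",
     ["the weather is nice today", "my cat knocked over the lamp"]),
]
print(nuc_recall_at_k(overlap_score, examples, k=1))  # 1.0 on this toy example
```

Under this setup, a 1-in-m candidate pool gives a chance baseline of 1/m for Recall@1, which is the reference point against which the human and machine classification rates in the abstract can be compared.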


Related research:

- 05/10/2019, Survey on Evaluation Methods for Dialogue Systems: In this paper we survey the methods and concepts developed for the evalu...
- 07/29/2018, Microsoft Dialogue Challenge: Building End-to-End Task-Completion Dialogue Systems: This proposal introduces a Dialogue Challenge for building end-to-end ta...
- 09/15/2023, RADE: Reference-Assisted Dialogue Evaluation for Open-Domain Dialogue: Evaluating open-domain dialogue systems is challenging for reasons such ...
- 02/22/2023, Few-Shot Structured Policy Learning for Multi-Domain and Multi-Task Dialogues: Reinforcement learning has been widely adopted to model dialogue manager...
- 09/29/2017, The First Evaluation of Chinese Human-Computer Dialogue Technology: In this paper, we introduce the first evaluation of Chinese human-comput...
- 09/14/2023, Exploring the Impact of Human Evaluator Group on Chat-Oriented Dialogue Evaluation: Human evaluation has been widely accepted as the standard for evaluating...
- 11/19/2022, Bipartite-play Dialogue Collection for Practical Automatic Evaluation of Dialogue Systems: Automation of dialogue system evaluation is a driving force for the effi...
