Assessing Dialogue Systems with Distribution Distances

05/06/2021
by   Jiannan Xiang, et al.
An important aspect of developing dialogue systems is evaluating and comparing the performance of different systems. Existing automatic evaluation metrics score individual turns and then average those scores for system-level comparison. In this paper, we propose instead to measure the performance of a dialogue system by computing the distribution-wise distance between its generated conversations and real-world conversations. Specifically, we develop and evaluate two distribution-wise metrics, FBD and PRD. Experiments on several dialogue corpora show that the proposed metrics correlate better with human judgments than existing metrics.
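To make the idea of a distribution-wise distance concrete, here is a minimal sketch that compares two sets of conversation embeddings by fitting a Gaussian to each and computing the Fréchet distance between them. This is an assumption modeled on the well-known Fréchet Inception Distance, not the paper's exact FBD formulation; the encoder producing the embeddings (e.g., a BERT-style model) is also assumed and stubbed out with random vectors.

```python
# Hedged sketch: a Frechet-style distance between two embedding sets,
# analogous in spirit to the paper's distribution-wise metrics. The exact
# formulation is an assumption based on the Frechet Inception Distance.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_emb: np.ndarray, gen_emb: np.ndarray) -> float:
    """Frechet distance between Gaussians fit to two embedding sets.

    real_emb, gen_emb: arrays of shape (n_samples, dim), e.g. sentence
    embeddings of real vs. system-generated conversations.
    """
    mu_r, mu_g = real_emb.mean(axis=0), gen_emb.mean(axis=0)
    cov_r = np.cov(real_emb, rowvar=False)
    cov_g = np.cov(gen_emb, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary components from numerical noise
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# Toy usage with random stand-ins for encoder outputs: an embedding set
# compared against itself should give a (near-)zero distance, while two
# different samples give a small positive one.
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 8))
gen = rng.normal(loc=0.5, size=(200, 8))
print(frechet_distance(real, real))
print(frechet_distance(real, gen))
```

A lower distance means the system's conversations are distributionally closer to human ones; unlike turn-level averaging, this captures properties of the whole corpus of generated dialogues at once.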
