Assessing Dialogue Systems with Distribution Distances

05/06/2021
by Jiannan Xiang, et al.

An important aspect of developing dialogue systems is evaluating and comparing the performance of different systems. Existing automatic evaluation metrics score quality at the turn level and average those scores for system-level comparison. In this paper, we propose instead to measure the performance of a dialogue system by computing the distribution-wise distance between its generated conversations and real-world conversations. Specifically, we develop and evaluate two distribution-wise metrics, FBD and PRD. Experiments on several dialogue corpora show that our proposed metrics correlate better with human judgments than existing metrics do.
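A Fréchet-style distribution distance of the kind the abstract describes fits a Gaussian to each set of sentence embeddings (real vs. generated responses) and uses the closed-form Fréchet distance between the two Gaussians. The sketch below is illustrative, not the authors' implementation: the function name is ours, and random vectors stand in for the BERT embeddings the paper's FBD metric would use.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_emb, gen_emb):
    """Frechet distance between Gaussians fitted to two embedding sets.

    real_emb, gen_emb: arrays of shape (n_samples, dim), e.g. sentence
    embeddings of real and model-generated responses.
    """
    mu_r, mu_g = real_emb.mean(axis=0), gen_emb.mean(axis=0)
    cov_r = np.cov(real_emb, rowvar=False)
    cov_g = np.cov(gen_emb, rowvar=False)
    # ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):   # discard tiny imaginary residue from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# Toy usage: identical samples give (near-)zero distance; a shifted
# distribution gives a clearly positive one.
rng = np.random.default_rng(0)
real = rng.normal(size=(200, 8))
shifted = rng.normal(loc=1.0, size=(200, 8))
d_same = frechet_distance(real, real)
d_diff = frechet_distance(real, shifted)
```

Because the distance is computed between fitted distributions rather than per-turn pairs, it compares a system's overall output behaviour to the reference corpus, which is exactly the system-level view the paper argues for.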


Related research

- Modeling Performance in Open-Domain Dialogue with PARADISE (10/21/2021)
- A Review of Evaluation Techniques for Social Dialogue Systems (09/13/2017)
- Neural Emoji Recommendation in Dialogue Systems (12/14/2016)
- Aiming to Know You Better Perhaps Makes Me a More Engaging Dialogue Partner (08/21/2018)
- DEAM: Dialogue Coherence Evaluation using AMR-based Semantic Manipulations (03/18/2022)
- The First Evaluation of Chinese Human-Computer Dialogue Technology (09/29/2017)
- ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and Multi-turn Comparisons (09/06/2019)
