GRADE: Automatic Graph-Enhanced Coherence Metric for Evaluating Open-Domain Dialogue Systems

10/08/2020
by   Lishan Huang, et al.

Automatically evaluating dialogue coherence is a challenging but essential capability for developing high-quality open-domain dialogue systems. However, current evaluation metrics consider only surface features or utterance-level semantics, without explicitly modeling the fine-grained topic-transition dynamics of dialogue flows. We posit that a graph structure built from the topics in a dialogue can accurately depict its underlying communication logic, offering a more natural basis for persuasive metrics. Capitalizing on this topic-level dialogue graph, we propose a new evaluation metric, GRADE, which stands for Graph-enhanced Representations for Automatic Dialogue Evaluation. Specifically, GRADE incorporates both coarse-grained utterance-level contextualized representations and fine-grained topic-level graph representations to evaluate dialogue coherence. The graph representations are obtained by reasoning over topic-level dialogue graphs enriched with evidence from a commonsense graph, including k-hop neighboring representations and hop-attention weights. Experimental results show that GRADE significantly outperforms other state-of-the-art metrics at measuring diverse dialogue models, in terms of Pearson and Spearman correlations with human judgments. In addition, we release a new large-scale human-evaluation benchmark to facilitate future research on automatic metrics.


Related research

06/02/2021: DynaEval: Unifying Turn and Dialogue Level Evaluation
A dialogue is essentially a multi-turn interaction among interlocutors. ...

04/06/2019: Evaluating Coherence in Dialogue Systems using Entailment
Evaluating open-domain dialogue systems is difficult due to the diversit...

02/22/2023: Topic-switch adapted Japanese Dialogue System based on PLATO-2
Large-scale open-domain dialogue systems such as PLATO-2 have achieved s...

06/01/2021: Towards Quantifiable Dialogue Coherence Evaluation
Automatic dialogue coherence evaluation has attracted increasing attenti...

08/22/2019: A Neural Model for Dialogue Coherence Assessment
Dialogue quality assessment is crucial for evaluating dialogue agents. A...

10/22/2022: EnDex: Evaluation of Dialogue Engagingness at Scale
We propose EnDex, the first human-reaction based model to evaluate dialo...

10/25/2022: FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation
Recent model-based reference-free metrics for open-domain dialogue evalu...
