FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation

10/25/2022
by Chen Zhang, et al.

Recent model-based reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment. However, they either perform turn-level evaluation or assess only a single dialogue quality dimension. A good evaluation metric should assess multiple quality dimensions at the dialogue level. To this end, we propose a multi-dimensional dialogue-level metric that consists of three sub-metrics, each targeting a specific dimension. The sub-metrics are trained with novel self-supervised objectives and exhibit strong correlations with human judgment for their respective dimensions. Moreover, we explore two approaches to combining the sub-metrics: metric ensemble and multitask learning. Both approaches yield a holistic metric that significantly outperforms the individual sub-metrics. Compared to the existing state-of-the-art metric, the combined metrics achieve around 16% relative improvement on average across three high-quality dialogue-level evaluation benchmarks.
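The metric-ensemble combination is straightforward to picture. Below is a minimal sketch, assuming each trained sub-metric maps a multi-turn dialogue to a score in [0, 1] and the holistic score is their unweighted average; the dimension names follow the paper, but the function names, type signatures, and the averaging rule are illustrative assumptions, not taken from the authors' released code.

```python
# Minimal sketch of the metric-ensemble combination described above.
# The sub-metric names and the simple averaging rule are illustrative
# assumptions; they do not reproduce the authors' implementation.

from typing import Callable, Dict, List

# A sub-metric maps a multi-turn dialogue (a list of utterances)
# to a quality score in [0, 1] for its target dimension.
SubMetric = Callable[[List[str]], float]

def ensemble_score(dialogue: List[str], sub_metrics: Dict[str, SubMetric]) -> float:
    """Average per-dimension scores into one holistic dialogue-level score."""
    scores = {name: metric(dialogue) for name, metric in sub_metrics.items()}
    return sum(scores.values()) / len(scores)

if __name__ == "__main__":
    # Stub scorers standing in for the trained sub-metric models.
    sub_metrics: Dict[str, SubMetric] = {
        "coherence":   lambda d: 0.82,
        "likability":  lambda d: 0.74,
        "topic_depth": lambda d: 0.61,
    }
    dialogue = [
        "Hi, how was your weekend?",
        "Great, I went hiking!",
        "Nice, where did you go?",
    ]
    print(f"holistic score: {ensemble_score(dialogue, sub_metrics):.3f}")
```

The multitask-learning variant mentioned in the abstract would instead train a single model with one output head per dimension; the ensemble sketched here keeps the sub-metrics separate and combines only their outputs.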


Related research:

06/19/2022 | MME-CRS: Multi-Metric Evaluation Based on Correlation Re-Scaling for Evaluating Open-Domain Dialogue
Automatic open-domain dialogue evaluation is a crucial component of dial...

08/17/2022 | SelF-Eval: Self-supervised Fine-grained Dialogue Evaluation
This paper introduces a novel Self-supervised Fine-grained Dialogue Eval...

06/27/2023 | C-PMI: Conditional Pointwise Mutual Information for Turn-level Dialogue Evaluation
Existing reference-free turn-level evaluation metrics for chatbots inade...

10/08/2020 | GRADE: Automatic Graph-Enhanced Coherence Metric for Evaluating Open-Domain Dialogue Systems
Automatically evaluating dialogue coherence is a challenging but high-de...

12/18/2022 | Don't Forget Your ABC's: Evaluating the State-of-the-Art in Chat-Oriented Dialogue Systems
There has been great recent advancement in human-computer chat. However,...

11/01/2020 | Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems
Many automatic evaluation metrics have been proposed to score the overal...

09/06/2019 | ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and Multi-turn Comparisons
While dialogue remains an important end-goal of natural language researc...
