A Comprehensive Assessment of Dialog Evaluation Metrics

06/07/2021
by Yi-Ting Yeh, et al.

Automatic evaluation metrics are a crucial component of dialog systems research. Standard language evaluation metrics are known to be ineffective for evaluating dialog. As such, recent research has proposed a number of novel, dialog-specific metrics that correlate better with human judgments. Because research moves quickly, these metrics have been assessed on different datasets, and no systematic comparison between them has yet been performed. To this end, this paper provides a comprehensive assessment of recently proposed dialog evaluation metrics on a number of datasets. Specifically, 17 different automatic evaluation metrics are evaluated on 10 different datasets. Furthermore, the metrics are assessed in different settings, to better qualify their respective strengths and weaknesses. Metrics are assessed (1) on both the turn level and the dialog level, (2) for different dialog lengths, (3) for different dialog qualities (e.g., coherence, engagingness), (4) for different types of response generation models (i.e., generative, retrieval, simple models, and state-of-the-art models), (5) taking into account the similarity of different metrics, and (6) exploring combinations of different metrics. This comprehensive assessment offers several takeaways pertaining to dialog evaluation metrics in general. It also suggests how to best assess evaluation metrics and indicates promising directions for future work.
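The assessment protocol the abstract describes boils down to scoring each response (or whole dialog) with a metric and measuring agreement with human ratings. The Python sketch below shows turn-level Pearson and Spearman correlation, plus one simple way to combine metrics by averaging z-normalized scores. The function names, data, and combination scheme are illustrative assumptions, not the paper's exact method.

```python
# A minimal sketch of the assessment protocol, assuming precomputed
# metric scores and human ratings. All names and numbers here are
# illustrative; the paper's actual pipeline may differ.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def correlate(metric_scores, human_ratings):
    """Turn-level correlation of one metric with human judgments."""
    r, _ = pearsonr(metric_scores, human_ratings)     # linear agreement
    rho, _ = spearmanr(metric_scores, human_ratings)  # rank agreement
    return r, rho

# Hypothetical scores for one metric over five responses.
metric_scores = np.array([0.62, 0.10, 0.85, 0.47, 0.33])
human_ratings = np.array([4.0, 2.0, 5.0, 3.5, 2.5])  # e.g., 1-5 Likert scale

print(correlate(metric_scores, human_ratings))

def combine(score_matrix):
    """One simple way to combine metrics: average their z-normalized scores.

    score_matrix has shape (n_metrics, n_examples).
    """
    mean = score_matrix.mean(axis=1, keepdims=True)
    std = score_matrix.std(axis=1, keepdims=True) + 1e-8  # guard constant metrics
    return ((score_matrix - mean) / std).mean(axis=0)
```

Dialog-level assessment works the same way, with scores and ratings aggregated per dialog rather than per turn.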

Related research

05/01/2020 · USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
The lack of meaningful automatic evaluation metrics for dialog has imped...

10/05/2021 · Investigating the Impact of Pre-trained Language Models on Dialog Evaluation
Recently, there is a surge of interest in applying pre-trained language ...

06/04/2021 · Improving Computer Generated Dialog with Auxiliary Loss Functions and Custom Evaluation Metrics
Although people have the ability to engage in vapid dialogue without eff...

10/16/2017 · Which is better? A Modularized Evaluation for Topic Popularity Prediction
Topic popularity prediction in social networks has drawn much attention ...

04/11/2023 · Approximating Human Evaluation of Social Chatbots with Prompting
Once powerful conversational models have become available for a wide aud...

05/21/2020 · Beyond User Self-Reported Likert Scale Ratings: A Comparison Model for Automatic Dialog Evaluation
Open Domain dialog system evaluation is one of the most important challe...
