MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation

12/14/2021
by Chen Zhang, et al.

Chatbots are designed to carry out human-like conversations across different domains, such as general chit-chat, knowledge exchange, and persona-grounded conversations. To measure the quality of such conversational agents, a dialogue evaluator is expected to conduct assessment across domains as well. However, most state-of-the-art automatic dialogue evaluation metrics (ADMs) are not designed for multi-domain evaluation. We are motivated to design a general and robust framework, MDD-Eval, to address this problem. Specifically, we first train a teacher evaluator on human-annotated data so that it acquires the skill to distinguish good dialogue responses from bad ones in a particular domain, and then adopt a self-training strategy to train a new evaluator on teacher-annotated multi-domain data, which helps the new evaluator generalize across multiple domains. MDD-Eval is extensively assessed on six dialogue evaluation benchmarks. Empirical results show that the MDD-Eval framework achieves strong performance, with an absolute improvement of 7% over the state-of-the-art ADMs in terms of mean Spearman correlation scores across all the evaluation benchmarks.
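
The abstract describes a teacher-student self-training pipeline: a teacher evaluator is trained on human-annotated data from one domain, it then pseudo-labels multi-domain dialogue data, and a new evaluator is trained on the teacher-annotated data. The following is only a minimal sketch of that idea, assuming toy data and simple scikit-learn models in place of the pretrained transformer encoders and augmented multi-domain corpus used in the paper; all names and examples below are illustrative, not the authors' implementation.

```python
# Minimal sketch of a teacher-student self-training loop for dialogue
# response quality rating. Illustrative only; MDD-Eval itself builds on
# pretrained encoders and an augmented multi-domain corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# 1) Human-annotated single-domain data: (context, response, label),
#    where label 1 = good response, 0 = bad response.
labeled_pairs = [
    ("how are you today", "i am doing great, thanks for asking", 1),
    ("how are you today", "the capital of france is paris", 0),
    ("tell me about your hobbies", "i love hiking and reading novels", 1),
    ("tell me about your hobbies", "banana banana banana", 0),
]

# 2) Unlabeled multi-domain dialogue data (e.g. knowledge-grounded chat)
#    to be pseudo-labeled by the teacher.
unlabeled_pairs = [
    ("who wrote hamlet", "hamlet was written by william shakespeare"),
    ("who wrote hamlet", "i like pizza with extra cheese"),
]

vectorizer = TfidfVectorizer()

def featurize(pairs):
    # Concatenate context and response into one string per example.
    return [c + " [SEP] " + r for c, r in pairs]

# Train the teacher evaluator on the human-annotated domain.
X_labeled = vectorizer.fit_transform(featurize([(c, r) for c, r, _ in labeled_pairs]))
y_labeled = [y for _, _, y in labeled_pairs]
teacher = LogisticRegression().fit(X_labeled, y_labeled)

# The teacher annotates the multi-domain data with pseudo-labels.
X_unlabeled = vectorizer.transform(featurize(unlabeled_pairs))
pseudo_labels = teacher.predict(X_unlabeled)

# Train the new (student) evaluator on human labels plus teacher-annotated data.
all_pairs = [(c, r) for c, r, _ in labeled_pairs] + list(unlabeled_pairs)
X_all = vectorizer.transform(featurize(all_pairs))
y_all = y_labeled + list(pseudo_labels)
student = LogisticRegression().fit(X_all, y_all)

# The student can now score responses from any of the covered domains.
test = featurize([("who wrote hamlet", "william shakespeare wrote it")])
print(student.predict_proba(vectorizer.transform(test))[:, 1])
```

In this sketch the student simply inherits the teacher's labels; the point is the data flow (human-labeled single-domain data, then teacher-annotated multi-domain data), not the particular models used.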


Related research

PoE: a Panel of Experts for Generalized Automatic Dialogue Assessment (12/18/2022)
Chatbots are expected to be knowledgeable across multiple domains, e.g. ...

Simple LLM Prompting is State-of-the-Art for Robust and Multilingual Dialogue Evaluation (08/31/2023)
Despite significant research effort in the development of automatic dial...

SelF-Eval: Self-supervised Fine-grained Dialogue Evaluation (08/17/2022)
This paper introduces a novel Self-supervised Fine-grained Dialogue Eval...

Towards Dialogue Systems with Agency in Human-AI Collaboration Tasks (05/22/2023)
Agency, the capacity to proactively shape events, is crucial to how huma...

Know More about Each Other: Evolving Dialogue Strategy via Compound Assessment (06/03/2019)
In this paper, a novel Generation-Evaluation framework is developed for ...

Contrastive Learning Reduces Hallucination in Conversations (12/20/2022)
Pre-trained language models (LMs) store knowledge in their parameters an...

Low-Resource Adaptation of Open-Domain Generative Chatbots (08/13/2021)
Recent work building open-domain chatbots has demonstrated that increasi...
