Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining

09/23/2020
by Ananya B. Sai, et al.

There is an increasing focus on model-based dialog evaluation metrics such as ADEM, RUBER, and the more recent BERT-based metrics. These models aim to assign a high score to all relevant responses and a low score to all irrelevant responses. Ideally, such models should be trained using multiple relevant and irrelevant responses for any given context. However, no such data is publicly available, and hence existing models are usually trained using a single relevant response and multiple randomly selected responses from other contexts (random negatives). To allow for better training and robust evaluation of model-based metrics, we introduce the DailyDialog++ dataset, consisting of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context. Using this dataset, we first show that even in the presence of multiple correct references, n-gram based metrics and embedding based metrics do not perform well at separating relevant responses from even random negatives. While model-based metrics perform better than n-gram and embedding based metrics on random negatives, their performance drops substantially when evaluated on adversarial examples. To check if large scale pretraining could help, we propose a new BERT-based evaluation metric called DEB, which is pretrained on 727M Reddit conversations and then finetuned on our dataset. DEB significantly outperforms existing models, showing better correlation with human judgements and better performance on random negatives (88.27% accuracy). However, its performance again drops substantially when evaluated on adversarial responses, thereby highlighting that even large-scale pretrained evaluation models are not robust to the adversarial examples in our dataset. The dataset and code are publicly available.

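The general recipe behind model-based metrics such as DEB, scoring a candidate response by how plausibly it follows the given context, can be sketched with an off-the-shelf BERT next-sentence-prediction head. The snippet below is a minimal illustration, not the authors' DEB implementation: the vanilla bert-base-uncased checkpoint, the HuggingFace transformers API, and the example utterances are assumptions for illustration, whereas DEB itself is additionally pretrained on 727M Reddit conversations and finetuned on DailyDialog++.

```python
# Minimal sketch of a BERT-based dialog evaluation metric (illustrative only,
# not the authors' DEB code): score how plausibly a response follows a context
# using BERT's next-sentence-prediction head.
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

def relevance_score(context: str, response: str) -> float:
    """Return the probability that `response` follows `context` (higher = more relevant)."""
    inputs = tokenizer(context, response, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape [1, 2]
    # Index 0 = "response is the continuation", index 1 = "response is unrelated".
    return torch.softmax(logits, dim=-1)[0, 0].item()

context = "I just adopted a puppy last weekend."
print(relevance_score(context, "That's great! What breed is it?"))       # relevant response
print(relevance_score(context, "The stock market closed lower today."))  # random negative
```

A relevant response should receive a noticeably higher score than a random negative; the paper's contribution is to test whether such models also separate relevant responses from adversarially crafted irrelevant ones, which share words and topics with the context.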
