Designing Precise and Robust Dialogue Response Evaluators

04/10/2020
by Tianyu Zhao, et al.

Automatic dialogue response evaluators have been proposed as an alternative to automated metrics and human evaluation. However, existing automatic evaluators achieve only moderate correlation with human judgement, and they are not robust. In this work, we propose to build a reference-free evaluator and exploit the power of semi-supervised training and pretrained (masked) language models. Experimental results demonstrate that the proposed evaluator achieves a strong correlation (> 0.6) with human judgement and generalizes robustly to diverse responses and corpora. We open-source the code and data at https://github.com/ZHAOTING/dialog-processing.
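
The abstract describes a reference-free evaluator built on a pretrained (masked) language model: the evaluator scores a candidate response given only the dialogue context, without comparing against a reference response. The sketch below is a rough illustration of one way such an evaluator could be structured, not the authors' exact model; the encoder choice (roberta-base), the pooling, and the regression head are assumptions for illustration, and the actual architecture and semi-supervised training procedure are in the linked dialog-processing repository.

```python
# Minimal sketch of a reference-free response evaluator on top of a
# pretrained masked language model. Illustrative only; encoder choice,
# pooling, and scoring head are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class ReferenceFreeEvaluator(nn.Module):
    """Scores a response given only the dialogue context (no reference)."""

    def __init__(self, encoder_name: str = "roberta-base"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Regression head mapping the pooled representation to a scalar score.
        self.scorer = nn.Sequential(
            nn.Linear(hidden, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, contexts, responses):
        # Jointly encode each (context, response) pair.
        batch = self.tokenizer(
            contexts, responses, padding=True, truncation=True, return_tensors="pt"
        )
        outputs = self.encoder(**batch)
        pooled = outputs.last_hidden_state[:, 0]   # first-token representation
        return self.scorer(pooled).squeeze(-1)     # one quality score per pair


if __name__ == "__main__":
    evaluator = ReferenceFreeEvaluator()
    scores = evaluator(["How was your weekend?"],
                       ["Great, I went hiking with friends."])
    print(scores)  # untrained scores are meaningless until fit to human ratings
```

In a setup like the paper's, such a scorer would be trained against human quality ratings (plus unlabeled dialogues for the semi-supervised part) and then judged by its correlation with human judgement, which is where the > 0.6 figure quoted above comes from.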
