Approximating Interactive Human Evaluation with Self-Play for
Open-Domain Dialog Systems
Building an open-domain conversational agent is a challenging problem.
Current evaluation methods, mostly post-hoc judgments of single-turn
response quality, do not capture conversation quality in a realistic interactive
context. In this paper, we investigate interactive human evaluation and provide
evidence for its necessity; we then introduce a novel, model-agnostic, and
dataset-agnostic method to approximate it. In particular, we propose a
self-play scenario where the dialog system talks to itself and we calculate a
combination of proxies such as sentiment and semantic coherence on the
conversation trajectory. We show that this metric is capable of capturing the
human-rated quality of a dialog model better than any automated metric known
to date, achieving a significant Pearson correlation (r>.7, p<.05). To
investigate the strengths of this novel metric and interactive evaluation in
comparison to state-of-the-art metrics and one-turn evaluation, we perform
extended experiments with a set of models, including several that make novel
improvements to recent hierarchical dialog generation architectures through
sentiment and semantic knowledge distillation at the utterance level. Finally,
we open-source the interactive evaluation platform we built and the dataset we
collected to allow researchers to efficiently deploy and evaluate generative
dialog models.
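
The sketch below illustrates the self-play evaluation idea described above: the model converses with itself, proxy scores such as sentiment and inter-utterance semantic coherence are computed over the resulting trajectory, and the combined score can then be correlated with interactive human ratings. All names (`respond`, `embed`, the proxy weights) are illustrative assumptions, not the paper's actual API or weighting.

```python
# Minimal sketch of self-play evaluation with proxy metrics (assumed interfaces).
import numpy as np
from scipy.stats import pearsonr                              # Pearson correlation
from nltk.sentiment.vader import SentimentIntensityAnalyzer   # needs nltk.download("vader_lexicon")

def self_play(model, seed_utterance, turns=10):
    """Let the dialog model talk to itself, starting from a seed utterance."""
    history = [seed_utterance]
    for _ in range(turns):
        history.append(model.respond(history))   # `respond` is a hypothetical model API
    return history

def sentiment_proxy(trajectory):
    """Mean VADER compound sentiment over the conversation trajectory."""
    analyzer = SentimentIntensityAnalyzer()
    return float(np.mean([analyzer.polarity_scores(u)["compound"] for u in trajectory]))

def coherence_proxy(trajectory, embed):
    """Mean cosine similarity between consecutive utterance embeddings.
    `embed` is any sentence-embedding function returning a 1-D vector."""
    vecs = [np.asarray(embed(u)) for u in trajectory]
    sims = [np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
            for a, b in zip(vecs, vecs[1:])]
    return float(np.mean(sims))

def self_play_score(model, seeds, embed, w_sent=0.5, w_coh=0.5):
    """Combine the proxies over several self-play conversations (weights are illustrative)."""
    scores = []
    for seed in seeds:
        traj = self_play(model, seed)
        scores.append(w_sent * sentiment_proxy(traj) + w_coh * coherence_proxy(traj))
    return float(np.mean(scores))

# Validating the metric: correlate per-model self-play scores with interactive human ratings.
# r, p = pearsonr([self_play_score(m, seeds, embed) for m in models], human_ratings)
```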