How To Evaluate Your Dialogue System: Probe Tasks as an Alternative for Token-level Evaluation Metrics

08/24/2020
by Prasanna Parthasarathi, et al.

Though generative dialogue modeling is widely seen as a language modeling task, the task demands that an agent have a complex natural-language understanding of its input text to carry out a meaningful interaction with a user. The automatic metrics in common use evaluate the quality of the generated text as a proxy for the holistic interaction with the agent, and such metrics have previously been shown not to correlate with human judgement. In this work, we observe that human evaluation of dialogue agents can also be inconclusive, owing to the lack of sufficient information for an appropriate evaluation: automatic metrics are deterministic yet shallow, while human evaluation can be relevant yet inconclusive. To bridge this gap in evaluation, we propose designing a set of probing tasks to evaluate dialogue models. These hand-crafted tasks aim to quantitatively evaluate a generative dialogue model's understanding beyond token-level evaluation of the generated text. The probing tasks are deterministic like automatic metrics, yet require human judgement in their design, benefiting from the best of both worlds. Through experiments on the probe tasks, we observe that, unlike RNN-based architectures, the Transformer model may not be learning to comprehend the input text, despite its generated text having higher overlap with the target text.
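
As a rough illustration (not the paper's actual probe tasks), the sketch below contrasts a token-level overlap score with a probe-style evaluation: a simplified unigram-precision score rewards a response that shares most tokens with the reference even when its meaning differs, while the probe fits a lightweight classifier on frozen context representations to test whether a hand-crafted label can be recovered. The `encode` function is a toy bag-of-words stand-in for a trained dialogue model's frozen encoder, and `bleu1`, `probe_accuracy`, and the intent labels are illustrative assumptions rather than artifacts of the paper.

```python
# Minimal sketch: token-overlap scoring vs. a probe-task evaluation.
# `encode` is a toy stand-in for a frozen dialogue-model encoder.
from collections import Counter
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def bleu1(reference: str, hypothesis: str) -> float:
    """Unigram precision: high overlap need not imply understanding."""
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    overlap = sum(min(ref[w], hyp[w]) for w in hyp)
    return overlap / max(sum(hyp.values()), 1)

def encode(context: str) -> np.ndarray:
    """Toy bag-of-words featurizer; replace with the model's frozen encoder."""
    vocab = ["where", "restaurant", "book", "movie", "weather", "ticket"]
    return np.array([context.lower().split().count(w) for w in vocab], dtype=float)

def probe_accuracy(contexts, labels) -> float:
    """Probe task: can a linear classifier recover a hand-crafted label
    (here, a hypothetical user intent) from the frozen representations?"""
    X = np.stack([encode(c) for c in contexts])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, labels, cv=2).mean()

if __name__ == "__main__":
    # High unigram overlap (~0.86) despite the response booking the wrong thing.
    print(bleu1("i would like to book a table", "i would like to book a ticket"))
    contexts = ["book a restaurant for two", "book a movie ticket",
                "find a restaurant nearby", "what movie is playing"]
    labels = ["restaurant", "movie", "restaurant", "movie"]
    print(probe_accuracy(contexts, labels))
```

The point of the contrast is that the overlap score is computed on surface tokens of the generated text alone, whereas the probe directly checks whether task-relevant information is present in the model's internal representation, which is the kind of understanding the proposed probe tasks are meant to quantify.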


