The Interplay of Task Success and Dialogue Quality: An In-Depth Evaluation in Task-Oriented Visual Dialogues

by Alberto Testoni et al.

When training a model on referential dialogue guessing games, the best model is usually selected by task success. We show that in the popular end-to-end approach, this choice prevents the model from learning to generate linguistically richer dialogues, since acquiring language proficiency takes longer than learning the guessing task. By comparing models playing different games (GuessWhat, GuessWhich, and Mutual Friends), we show that this discrepancy is model- and task-agnostic. We investigate whether and when better language quality could lead to higher task success. We show that in GuessWhat, models could increase their accuracy if they also learn to ground, encode, and decode words that occur infrequently in the training set.


End-to-End Optimization of Task-Oriented Dialogue Model with Deep Reinforcement Learning

In this paper, we present a neural network based task-oriented dialogue ...

Effective Data Augmentation Approaches to End-to-End Task-Oriented Dialogue

The training of task-oriented dialogue systems is often confronted with ...

Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System

Pre-trained language models have been recently shown to benefit task-ori...

Self-Attentional Models Application in Task-Oriented Dialogue Generation Systems

Self-attentional models are a new paradigm for sequence modelling tasks ...

OPAL: Ontology-Aware Pretrained Language Model for End-to-End Task-Oriented Dialogue

This paper presents an ontology-aware pretrained language model (OPAL) f...

Grounded Agreement Games: Emphasizing Conversational Grounding in Visual Dialogue Settings

Where early work on dialogue in Computational Linguistics put much empha...

Establishing linguistic conventions in task-oriented primeval dialogue

In this paper, we claim that language is likely to have emerged as a mec...