Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?

by Prasanna Parthasarathi, et al.

In data-driven approaches, predicting the next utterance in a dialogue depends on encoding the user's input text so that an appropriate and relevant response can be generated. Although the semantic and syntactic quality of the generated language is routinely evaluated, the encoded representation of the input more often than not is not. Because the encoder's representation is essential for predicting an appropriate response, evaluating that representation is a challenging yet important problem. In this work, we show that evaluating generated text with human or automatic metrics is not sufficient to assess the soundness of a dialogue model's language understanding, and to that end we propose a set of probe tasks for evaluating the encoder representations of different language encoders commonly used in dialogue models. From our experiments, we observe that some probe tasks are easy and others hard for even sophisticated model architectures to learn. We further observe that RNN-based architectures score lower than Transformer models on automatic text-generation metrics, yet outperform Transformers on the probe tasks, indicating that RNNs might preserve task information better than Transformers.
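To make the probing setup concrete, the sketch below illustrates the general technique the abstract refers to: freeze an encoder, then train a small classifier (the probe) on its representations to predict a task attribute. Everything here is an illustrative stand-in, not the paper's actual encoders, probe tasks, or data: the "encoder" is a fixed bag-of-words mapping, the probe is a logistic regression trained by SGD, and the toy task ("is this utterance a question?") is invented for the example.

```python
# Minimal sketch of a probe task on frozen encoder representations.
# All names, the toy vocabulary, and the "question vs. statement" task are
# illustrative assumptions, not taken from the paper.
import math
import random

random.seed(0)

VOCAB = ["what", "where", "book", "please", "is", "the", "time", "table", "?"]

def frozen_encoder(utterance):
    """Stand-in for a frozen dialogue encoder: bag-of-words over a fixed vocab."""
    tokens = utterance.lower().replace("?", " ?").split()
    return [float(tokens.count(w)) for w in VOCAB]

def train_probe(reps, labels, lr=0.5, epochs=200):
    """Train a logistic-regression probe on the (frozen) representations."""
    dim = len(reps[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(reps, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def probe_predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy probe task: does the representation encode "is this a question?"
data = [
    ("what time is the booking ?", 1),
    ("where is the table ?", 1),
    ("book the table please", 0),
    ("the time is fine", 0),
]
reps = [frozen_encoder(u) for u, _ in data]
labels = [y for _, y in data]
w, b = train_probe(reps, labels)
accuracy = sum(probe_predict(w, b, x) == y for x, y in zip(reps, labels)) / len(data)
```

The key design point, under this framing, is that only the probe's parameters are trained; if the probe succeeds, the task information was already present in the frozen representation, which is what the paper's probe tasks measure across RNN and Transformer encoders.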
