On Task-Level Dialogue Composition of Generative Transformer Model

10/09/2020
by Prasanna Parthasarathi, et al.

Task-oriented dialogue systems help users accomplish tasks such as booking a movie ticket or ordering food via conversation. Generative models parameterized by deep neural networks are widely used for next-turn response generation in such systems. Users naturally want to accomplish multiple tasks within the same conversation, but the ability of generative models to compose multiple tasks is not well studied. In this work, we study whether Transformer generative models trained on human-human single-task dialogues can learn to compose multiple tasks. To that end, we propose and explore two solutions: (1) creating synthetic multi-task dialogue training data from human-human single-task dialogues and (2) forcing the encoder representation to be invariant between single- and multi-task dialogues using an auxiliary loss. The results of our experiments highlight the difficulty that even a sophisticated variant of the Transformer model has in learning to compose multiple tasks from single-task dialogues.
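The first proposed solution, creating synthetic multi-task data, can be illustrated with a minimal sketch. The abstract does not specify the composition scheme, so the concatenation strategy below (joining two single-task dialogues turn by turn, with a sampled transition point) is an assumption for illustration; the function name and dialogue representation are hypothetical.

```python
import random

def compose_dialogues(dialogue_a, dialogue_b, rng=None):
    """Build one synthetic multi-task dialogue from two single-task
    dialogues, each represented as a list of (speaker, utterance) turns.

    Assumed scheme: keep a random-length prefix of the first dialogue
    that ends on a system turn, then append the second dialogue, so the
    user appears to switch tasks mid-conversation.
    """
    rng = rng or random.Random()
    # indices of system turns in dialogue_a that could end the first task
    system_turn_ends = [i + 1 for i, (spk, _) in enumerate(dialogue_a)
                        if spk == "system"]
    cut = rng.choice(system_turn_ends) if system_turn_ends else len(dialogue_a)
    return dialogue_a[:cut] + dialogue_b

# Example: a movie-booking dialogue followed by a food-ordering dialogue.
movie = [("user", "Book two tickets for Dune."),
         ("system", "Done, two tickets for Dune tonight.")]
food = [("user", "Also, order a pizza."),
        ("system", "A pizza is on its way.")]
synthetic = compose_dialogues(movie, food, rng=random.Random(0))
```

A real pipeline would also need to smooth the transition (e.g., adding a task-switch user utterance), which this sketch omits.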

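The second proposed solution, an auxiliary loss that pushes the encoder toward representations invariant between single- and multi-task dialogues, can be sketched as a penalty on the distance between pooled encoder states added to the usual generation loss. The exact loss form and weighting are not given in the abstract, so the squared-L2 penalty and the `lam` weight below are assumptions; the representations are shown as plain Python vectors to keep the sketch self-contained.

```python
def invariance_loss(single_task_repr, multi_task_repr):
    """Squared L2 distance between two pooled encoder representations
    (each a flat list of floats of equal length)."""
    return sum((a - b) ** 2 for a, b in zip(single_task_repr, multi_task_repr))

def total_loss(generation_loss, single_task_repr, multi_task_repr, lam=0.1):
    """Generation loss plus a weighted invariance penalty.

    lam is an assumed hyperparameter trading off fluency against
    encoder invariance across single- and multi-task contexts.
    """
    return generation_loss + lam * invariance_loss(single_task_repr,
                                                   multi_task_repr)
```

In practice the two representations would come from encoding the same task segment in isolation and inside a composed dialogue, so minimizing the penalty discourages the encoder from representing a task differently just because another task precedes it.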

