GALAXY: A Generative Pre-trained Model for Task-Oriented Dialog with Semi-Supervised Learning and Explicit Policy Injection

11/29/2021
by Wanwei He, et al.

Pre-trained models have proved to be powerful in enhancing task-oriented dialog systems. However, current pre-training methods mainly focus on enhancing dialog understanding and generation tasks while neglecting the exploitation of dialog policy. In this paper, we propose GALAXY, a novel pre-trained dialog model that explicitly learns dialog policy from limited labeled dialogs and large-scale unlabeled dialog corpora via semi-supervised learning. Specifically, we introduce a dialog act prediction task for policy optimization during pre-training and employ a consistency regularization term to refine the learned representation with the help of unlabeled dialogs. We also implement a gating mechanism to weigh suitable unlabeled dialog samples. Empirical results show that GALAXY substantially improves the performance of task-oriented dialog systems, and achieves new state-of-the-art results on benchmark datasets: In-Car, MultiWOZ2.0 and MultiWOZ2.1, improving their end-to-end combined scores by 2.5, 5.3 and 5.5 points, respectively. We also show that GALAXY has a stronger few-shot ability than existing models under various low-resource settings.
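The abstract's core training idea, combining a supervised dialog-act prediction loss on labeled dialogs with a gated consistency-regularization term on unlabeled dialogs, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the symmetric-KL consistency term between two perturbed forward passes, and the confidence-based gate threshold are all assumptions for the sake of the example.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q, eps=1e-12):
    # KL(p || q) summed over dialog-act classes.
    return sum(pi * (math.log(pi + eps) - math.log(qi + eps))
               for pi, qi in zip(p, q))

def semi_supervised_policy_loss(labeled_logits, labels,
                                unlab_logits_a, unlab_logits_b,
                                gate_threshold=0.5):
    """Illustrative combined loss: supervised cross-entropy on labeled
    dialog-act predictions plus a gated consistency term on unlabeled
    dialogs (two forward passes, e.g. with different dropout masks)."""
    # Supervised dialog-act prediction loss on labeled dialogs.
    ce = 0.0
    for logits, y in zip(labeled_logits, labels):
        ce -= math.log(softmax(logits)[y] + 1e-12)
    ce /= len(labels)

    # Consistency regularization: the two passes on the same unlabeled
    # dialog should agree; a gate keeps only confident samples, here
    # approximated by the max predicted dialog-act probability.
    reg_sum, gate_count = 0.0, 0
    for la, lb in zip(unlab_logits_a, unlab_logits_b):
        pa, pb = softmax(la), softmax(lb)
        if max(pa) > gate_threshold:  # gating mechanism (assumed form)
            reg_sum += 0.5 * (kl(pa, pb) + kl(pb, pa))
            gate_count += 1
    reg = reg_sum / gate_count if gate_count else 0.0
    return ce + reg
```

When the two unlabeled passes produce identical predictions, the consistency term vanishes and the loss reduces to the supervised cross-entropy, which is the intended behavior of a consistency regularizer.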

Related research

09/14/2022 · SPACE-3: Unified Dialog Model Pre-training for Task-Oriented Dialog Understanding and Generation
  Recently, pre-training methods have shown remarkable success in task-ori...

09/14/2022 · SPACE-2: Tree-Structured Semi-Supervised Contrastive Pre-training for Task-Oriented Dialog Understanding
  Pre-training methods with contrastive learning objectives have shown rem...

05/11/2020 · SOLOIST: Few-shot Task-Oriented Dialog with A Single Pre-trained Auto-regressive Model
  This paper presents a new method SOLOIST, which uses transfer learning t...

05/25/2022 · The Dialog Must Go On: Improving Visual Dialog via Generative Self-Training
  Visual dialog (VisDial) is a task of answering a sequence of questions g...

08/28/2021 · Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems
  As the labeling cost for different modules in task-oriented dialog (ToD)...

10/17/2022 · Semi-Supervised Knowledge-Grounded Pre-training for Task-Oriented Dialog Systems
  Recent advances in neural approaches greatly improve task-oriented dialo...

09/20/2023 · UniPCM: Universal Pre-trained Conversation Model with Task-aware Automatic Prompt
  Recent research has shown that multi-task pre-training greatly improves ...