Improving Limited Labeled Dialogue State Tracking with Self-Supervision

10/26/2020
by   Chien-Sheng Wu, et al.
Existing dialogue state tracking (DST) models require large amounts of labeled data. However, collecting high-quality labels is costly, especially as the number of domains grows. In this paper, we address a practical DST problem that is rarely discussed: learning efficiently from limited labeled data. We present and investigate two self-supervised objectives: preserving latent consistency and modeling conversational behavior. We encourage a DST model to produce consistent latent distributions given a perturbed input, making it more robust to unseen scenarios. We also add an auxiliary utterance-generation task that models a potential correlation between conversational behavior and dialogue states. Experimental results show that our proposed self-supervised signals improve joint goal accuracy by 8.95% when only 1% of the labeled data is used on the MultiWOZ dataset. We achieve a further 1.76% improvement when unlabeled data is jointly trained in a semi-supervised setting. We analyze and visualize how the proposed self-supervised signals help the DST task and hope to stimulate future data-efficient DST research.
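The latent-consistency objective described above can be illustrated with a minimal sketch: penalize the divergence between the slot-value distributions a DST model predicts for an original dialogue encoding and for a perturbed version of it. This is not the paper's implementation; the function names and the choice of a symmetric KL penalty are illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    # KL(p || q) with a small epsilon for numerical safety.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def consistency_loss(logits_orig, logits_perturbed):
    """Symmetric KL between the slot-value distributions produced
    from the original and the perturbed dialogue input.
    A perfectly consistent model yields a loss of zero."""
    p = softmax(np.asarray(logits_orig, dtype=float))
    q = softmax(np.asarray(logits_perturbed, dtype=float))
    return 0.5 * (kl(p, q) + kl(q, p))
```

In training, this term would be added (with a weighting coefficient) to the supervised DST loss on the labeled portion of the data, and it requires no labels at all, which is what makes it usable on the unlabeled dialogues in the semi-supervised setting.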

