Out-of-Task Training for Dialog State Tracking Models

by Michael Heck, et al.

Dialog state tracking (DST) suffers from severe data sparsity. While many natural language processing (NLP) tasks benefit from transfer learning and multi-task learning, in dialog these methods are limited by the amount of available data and by the specificity of dialog applications. In this work, we successfully utilize non-dialog data from unrelated NLP tasks to train dialog state trackers. This opens the door to using the abundance of unrelated NLP corpora to mitigate the data sparsity issue inherent to DST.
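The core idea of out-of-task training is to mix examples from an unrelated corpus into the DST training stream. The abstract does not specify the mixing scheme, so the sketch below is purely illustrative: a hypothetical sampler that interleaves in-task dialog examples with out-of-task examples at a configurable ratio, tagging each example with its origin so a multi-task model could route it to the appropriate loss.

```python
import random

def mix_training_stream(dialog_data, aux_data, aux_ratio=0.5, seed=0):
    """Interleave in-task (dialog) and out-of-task (aux) examples.

    dialog_data: list of in-task training examples (always all used).
    aux_data:    list of out-of-task examples (sampled with replacement).
    aux_ratio:   probability of injecting one aux example after each
                 dialog example; this mixing scheme is an assumption,
                 not the scheme used in the paper.
    Returns a list of (source_tag, example) pairs.
    """
    rng = random.Random(seed)
    mixed = []
    for example in dialog_data:
        mixed.append(("dialog", example))
        if aux_data and rng.random() < aux_ratio:
            mixed.append(("aux", rng.choice(aux_data)))
    return mixed

# Toy usage: mix slot-filling turns with unrelated QA-style examples.
dialog = ["turn: book a table for two", "turn: in the north of town"]
aux = ["qa: who wrote Hamlet?", "qa: capital of France?"]
stream = mix_training_stream(dialog, aux, aux_ratio=1.0)
```
With `aux_ratio=1.0`, every dialog example is followed by one sampled out-of-task example; lowering the ratio dilutes the auxiliary signal. The tags would let a trainer apply a DST head to `"dialog"` batches and an auxiliary head to `"aux"` batches.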
