Out-of-Task Training for Dialog State Tracking Models

11/18/2020
by Michael Heck, et al.

Dialog state tracking (DST) suffers from severe data sparsity. While many natural language processing (NLP) tasks benefit from transfer learning and multi-task learning, in dialog these methods are limited by the amount of available data and by the specificity of dialog applications. In this work, we successfully utilize non-dialog data from unrelated NLP tasks to train dialog state trackers. This opens the door to exploiting the abundance of unrelated NLP corpora to mitigate the data sparsity issue inherent to DST.
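The abstract describes the idea only at a high level. The sketch below illustrates what out-of-task training could look like in practice: it is a minimal sketch, assuming a span-based tracker built on a pretrained transformer and extractive question answering (SQuAD-style) as the unrelated task. The model choice, the train_step helper, and the example inputs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of out-of-task training for a span-based dialog
# state tracker. Assumptions: slot values are extracted as text spans,
# and SQuAD-style extractive QA serves as the unrelated NLP task.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
model.train()
optimizer = AdamW(model.parameters(), lr=2e-5)

def train_step(question: str, context: str, start_tok: int, end_tok: int) -> float:
    """One gradient step on a (question, context, answer span) triple.

    Phase 1 feeds QA pairs from an unrelated corpus; phase 2 reuses the
    exact same step on DST data, phrasing each slot as a question about
    the dialog context. Only the data source changes between phases.
    """
    inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
    outputs = model(
        **inputs,
        start_positions=torch.tensor([start_tok]),
        end_positions=torch.tensor([end_tok]),
    )
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()

# Phase 1: out-of-task training on QA-style data.
loss = train_step(
    "What is the capital of France?",
    "Paris is the capital and largest city of France.",
    start_tok=9, end_tok=9,  # illustrative token indices of the gold span "Paris"
)

# Phase 2: continue on DST data, posing each slot as a question over a turn.
loss = train_step(
    "Which area is the user interested in?",
    "User: I am looking for a cheap hotel in the city centre.",
    start_tok=21, end_tok=22,  # illustrative token indices of "city centre"
)
```

In a DST setting, phase 2 would run over a task-oriented corpus such as MultiWOZ; the point of the sketch is that a span-prediction head trained on out-of-task data transfers to slot filling without architectural changes.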

09/19/2021
Multi-Task Learning in Natural Language Processing: An Overview
Deep learning approaches have achieved great success in the field of Nat...

01/24/2021
Does Dialog Length matter for Next Response Selection task? An Empirical Study
In the last few years, the release of BERT, a multilingual transformer b...

01/23/2020
Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation
Recent works have shown that generative data augmentation, where synthet...

02/20/2017
The Dialog State Tracking Challenge with Bayesian Approach
Generative model has been one of the most common approaches for solving ...

01/10/2021
Transfer Learning and Augmentation for Word Sense Disambiguation
Many downstream NLP tasks have shown significant improvement through con...

04/22/2018
Named Entities troubling your Neural Methods? Build NE-Table: A neural approach for handling Named Entities
Many natural language processing tasks require dealing with Named Entiti...

05/29/2021
Annotation Inconsistency and Entity Bias in MultiWOZ
MultiWOZ is one of the most popular multi-domain task-oriented dialog da...