Out-of-Task Training for Dialog State Tracking Models

11/18/2020
by Michael Heck, et al.

Dialog state tracking (DST) suffers from severe data sparsity. While many natural language processing (NLP) tasks benefit from transfer learning and multi-task learning, in dialog these methods are limited by the scarcity of available data and by the specificity of dialog applications. In this work, we successfully utilize non-dialog data from unrelated NLP tasks to train dialog state trackers. This opens the door to exploiting the abundance of unrelated NLP corpora to mitigate the data sparsity issue inherent to DST.
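
The abstract states the idea at a high level; one concrete way to picture out-of-task training is to cast both the unrelated task and DST as span prediction over text, train a shared encoder and span head on the abundant non-dialog corpus first, and then continue training on the sparse dialog data. The sketch below illustrates this under stated assumptions: it uses Hugging Face Transformers, takes SQuAD-style extractive QA as the stand-in non-dialog task, frames a DST slot as a question over the dialog context, and the span_step helper and toy examples are hypothetical, not taken from the paper's code.

```python
# Minimal sketch of out-of-task training for a span-based dialog state
# tracker. Assumptions (not from the paper): extractive QA is the unrelated
# non-dialog task, DST slot filling is cast as span prediction, and both
# phases share one encoder and one span head.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

def span_step(query, context, answer):
    """One gradient step on a (query, context, answer-span) triple."""
    enc = tokenizer(query, context, return_tensors="pt", truncation=True)
    # Locate the answer's token span inside the encoded sequence.
    ans_ids = tokenizer(answer, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    start = next(i for i in range(len(ids))
                 if ids[i:i + len(ans_ids)] == ans_ids)
    end = start + len(ans_ids) - 1
    # Cross-entropy over start/end positions, as in extractive QA.
    loss = model(**enc,
                 start_positions=torch.tensor([start]),
                 end_positions=torch.tensor([end])).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Phase 1: out-of-task training on abundant non-dialog QA data (toy example).
span_step("What is tracked?",
          "The belief state is tracked by the model.",
          "belief state")

# Phase 2: continue on sparse DST data, with the slot name as the query.
span_step("hotel-area",
          "user: i need a hotel in the north of town .",
          "north")
```

The design point is simply that both phases optimize the same span objective, so any extractive corpus can supply training signal before the model ever sees dialog data.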

Related research

09/19/2021 - Multi-Task Learning in Natural Language Processing: An Overview
Deep learning approaches have achieved great success in the field of Nat...

01/24/2021 - Does Dialog Length matter for Next Response Selection task? An Empirical Study
In the last few years, the release of BERT, a multilingual transformer b...

01/23/2020 - Variational Hierarchical Dialog Autoencoder for Dialog State Tracking Data Augmentation
Recent works have shown that generative data augmentation, where synthet...

05/24/2023 - Frugal Prompting for Dialog Models
The use of large language models (LLMs) in natural language processing (...

02/20/2017 - The Dialog State Tracking Challenge with Bayesian Approach
Generative model has been one of the most common approaches for solving ...

04/22/2018 - Named Entities troubling your Neural Methods? Build NE-Table: A neural approach for handling Named Entities
Many natural language processing tasks require dealing with Named Entiti...

04/07/2023 - Gated Mechanism Enhanced Multi-Task Learning for Dialog Routing
Currently, human-bot symbiosis dialog systems, e.g., pre- and after-sale...
