Multi-Domain Adversarial Learning for Slot Filling in Spoken Language Understanding

11/30/2017
by   Bing Liu, et al.

The goal of this paper is to learn cross-domain representations for the slot filling task in spoken language understanding (SLU). Most recently published SLU models are domain-specific and operate on a single task domain. Annotating data for each individual domain is both financially costly and non-scalable. In this work, we propose an adversarial training method for learning common features and representations that can be shared across multiple domains. A model that produces such shared representations can be combined with models trained on individual-domain SLU data to reduce the number of training samples required when developing a new domain. In experiments on data sets from multiple domains, we show that adversarial training helps in learning better domain-general SLU models, leading to improved slot filling F1 scores. We further show that applying adversarial learning to the domain-general model also yields higher slot filling performance when it is jointly optimized with domain-specific models.
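The abstract describes adversarial training that pushes a shared encoder toward domain-invariant features while a slot (task) head is trained normally. A common way to realize this is the gradient-reversal trick: the domain classifier's gradient is negated before it reaches the shared encoder. The sketch below is a minimal, self-contained NumPy illustration on synthetic data, not the paper's actual model; the toy data, layer sizes, and the hyperparameters `lr` and `lam` are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (assumption): two "domains" sharing one slot-label signal
# (feature 0) plus a domain-specific offset (feature 1) that the
# adversary should push the shared encoder to discard.
n, d, h = 200, 8, 4
X = rng.normal(size=(n, d))
slot_y = (X[:, 0] > 0).astype(float)          # shared task signal
domain_y = (np.arange(n) % 2).astype(float)   # alternating domain labels
X[:, 1] += domain_y * 2.0                     # domain-specific feature

# Parameters: shared linear encoder, slot head, domain (adversary) head
W_enc = rng.normal(scale=0.1, size=(d, h))
w_slot = rng.normal(scale=0.1, size=h)
w_dom = rng.normal(scale=0.1, size=h)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

lr, lam = 0.2, 0.5  # lam scales the reversed domain gradient
for step in range(1000):
    Z = X @ W_enc                 # shared representation
    p_slot = sigmoid(Z @ w_slot)
    p_dom = sigmoid(Z @ w_dom)

    # Cross-entropy gradients w.r.t. the two logits
    g_slot = (p_slot - slot_y) / n
    g_dom = (p_dom - domain_y) / n

    # Both heads minimise their own loss as usual
    grad_w_slot = Z.T @ g_slot
    grad_w_dom = Z.T @ g_dom

    # Encoder update: the slot gradient flows through unchanged, while
    # the domain gradient is REVERSED (scaled by -lam), so the shared
    # features are trained to be uninformative about the domain.
    grad_Z = np.outer(g_slot, w_slot) - lam * np.outer(g_dom, w_dom)
    W_enc -= lr * (X.T @ grad_Z)
    w_slot -= lr * grad_w_slot
    w_dom -= lr * grad_w_dom

slot_acc = float(((sigmoid(X @ W_enc @ w_slot) > 0.5) == slot_y).mean())
print("slot accuracy:", round(slot_acc, 2))
```

The single sign flip in `grad_Z` is the entire adversarial mechanism: the domain head still learns to predict the domain as well as it can, but the encoder receives the opposite of that gradient, making domain prediction harder from the shared features while leaving the slot task intact.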


