
Locale-agnostic Universal Domain Classification Model in Spoken Language Understanding

by Jihwan Lee, et al.

In this paper, we introduce an approach for leveraging available data across multiple locales sharing the same language to 1) improve domain classification accuracy in Spoken Language Understanding, and thus user experience, even when new locales lack sufficient data, and 2) reduce the cost of scaling the domain classifier to a large number of locales. We propose a locale-agnostic universal domain classification model based on selective multi-task learning that learns a joint representation of an utterance over locales with different sets of domains and allows locales to share knowledge selectively depending on the domain. The experimental results demonstrate the effectiveness of our approach on the domain classification task in the scenario of multiple locales with imbalanced data and disparate domain sets. The proposed approach outperforms baseline models, especially when classifying locale-specific domains and low-resourced domains.
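The core structural idea described above can be illustrated with a minimal sketch: a shared representation serves all locales, while classifier heads are shared only among locales that support the same domain. All names here (the class, the locale-to-domain mapping, the placeholder heads) are illustrative assumptions, not the paper's actual implementation.

```python
class UniversalDomainClassifier:
    """Hypothetical sketch of selective sharing across locales.

    One classifier head exists per unique domain and is reused by every
    locale that supports that domain; locale-specific domains therefore
    get their own dedicated heads. The shared utterance encoder is
    omitted for brevity.
    """

    def __init__(self, locale_domains):
        # locale_domains: dict mapping locale -> set of domain names
        self.locale_domains = locale_domains
        # One head object per unique domain, shared across all locales
        # that include it (heads stand in for real classifier layers).
        self.heads = {d: object()
                      for domains in locale_domains.values()
                      for d in domains}

    def heads_for(self, locale):
        # Selective sharing: a locale sees only the heads for its own
        # domain set, but those heads are shared with other locales.
        return {d: self.heads[d] for d in self.locale_domains[locale]}


# Two locales of the same language with overlapping but distinct domains.
locales = {
    "en-US": {"Music", "Shopping", "Weather"},
    "en-GB": {"Music", "Weather", "Football"},
}
model = UniversalDomainClassifier(locales)
```

With this layout, knowledge for a common domain such as "Music" is pooled across en-US and en-GB, while "Football" remains locale-specific to en-GB, mirroring the selective-sharing behavior the abstract describes.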


Multi-Domain Spoken Language Understanding Using Domain- and Task-Aware Parameterization

Spoken language understanding has been addressed as a supervised learnin...

Coupled Representation Learning for Domains, Intents and Slots in Spoken Language Understanding

Representation learning is an essential problem in a wide range of appli...

Automatic Data Expansion for Customer-care Spoken Language Understanding

Spoken language understanding (SLU) systems are widely used in handling ...

Domain Adaptation of Recurrent Neural Networks for Natural Language Understanding

The goal of this paper is to use multi-task learning to efficiently scal...

Capsule Networks for Low Resource Spoken Language Understanding

Designing a spoken language understanding system for command-and-control...

Unsupervised Spoken Utterance Classification

An intelligent virtual assistant (IVA) enables effortless conversations ...