Zero-Shot Adaptive Transfer for Conversational Language Understanding

08/29/2018
by Sungjin Lee, et al.

Conversational agents such as Alexa and Google Assistant constantly need to increase their language understanding capabilities by adding new domains. A massive amount of labeled data is required for training each new domain. While domain adaptation approaches alleviate the annotation cost, prior approaches suffer from increased training time and suboptimal concept alignments. To tackle this, we introduce a novel Zero-Shot Adaptive Transfer method for slot tagging that utilizes the slot description for transferring reusable concepts across domains, and enjoys efficient training without any explicit concept alignments. Extensive experimentation over a dataset of 10 domains relevant to our commercial personal digital assistant shows that our model outperforms previous state-of-the-art systems by a large margin, and achieves an even higher improvement in the low data regime.
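The core idea described above, scoring each token against an embedding of a slot's natural-language description so that a new domain needs no explicit concept alignment, can be sketched with toy word vectors. Everything below (the vectors, the averaging encoder, the similarity threshold) is illustrative, not the paper's actual model:

```python
import math

# Toy 3-dimensional word vectors standing in for pretrained embeddings
# (hypothetical values chosen for illustration only).
VECTORS = {
    "city":    [1.0, 0.0, 0.0],
    "seattle": [0.95, 0.05, 0.0],
    "time":    [0.0, 1.0, 0.0],
    "noon":    [0.05, 0.95, 0.0],
    "fly":     [0.0, 0.0, 1.0],
    "to":      [0.0, 0.0, 1.0],
    "at":      [0.0, 0.0, 1.0],
}

def embed(text):
    """Average the word vectors of a phrase; unknown words map to zero."""
    words = text.lower().split()
    acc = [0.0, 0.0, 0.0]
    for w in words:
        v = VECTORS.get(w, [0.0, 0.0, 0.0])
        acc = [a + b for a, b in zip(acc, v)]
    return [a / max(len(words), 1) for a in acc]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv) if nu and nv else 0.0

def zero_shot_tag(utterance, slot_descriptions, threshold=0.8):
    """Tag each token with the best-matching slot description, else 'O'.

    Because slots are identified by their descriptions rather than by
    domain-specific label IDs, a previously unseen slot can be added
    just by supplying a new description string.
    """
    tags = []
    for token in utterance.split():
        tok_vec = embed(token)
        best_slot, best_score = "O", threshold
        for slot, desc in slot_descriptions.items():
            score = cosine(tok_vec, embed(desc))
            if score > best_score:
                best_slot, best_score = slot, score
        tags.append(best_slot)
    return tags

slots = {"destination": "name of the city", "depart_time": "time of departure"}
print(zero_shot_tag("fly to seattle at noon", slots))
# -> ['O', 'O', 'destination', 'O', 'depart_time']
```

The actual model replaces the averaged toy vectors with learned contextual encoders and a proper tagging layer, but the zero-shot mechanism is the same: slots are represented by their descriptions, so transfer to a new domain requires no per-slot alignment or retraining from scratch.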

Related research

06/03/2017 · Concept Transfer Learning for Adaptive Language Understanding
Semantic transfer is an important problem of the language understanding ...

07/07/2017 · Towards Zero-Shot Frame Semantic Parsing for Domain Scaling
State-of-the-art slot filling models for goal-oriented human/machine con...

01/24/2023 · Low-Resource Compositional Semantic Parsing with Concept Pretraining
Semantic parsing plays a key role in digital voice assistants such as Al...

03/22/2020 · Prior Knowledge Driven Label Embedding for Slot Filling in Natural Language Understanding
Traditional slot filling in natural language understanding (NLU) predict...

10/02/2020 · MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale
We study the zero-shot transfer capabilities of text matching models on ...

07/18/2023 · Zero-shot Query Reformulation for Conversational Search
As the popularity of voice assistants continues to surge, conversational...

01/24/2020 · MT-BioNER: Multi-task Learning for Biomedical Named Entity Recognition using Deep Bidirectional Transformers
Conversational agents such as Cortana, Alexa and Siri are continuously w...
