Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents

11/13/2018
by Aditya Siddhant, et al.

User interaction with voice-powered agents generates large amounts of unlabeled utterances. In this paper, we explore techniques to efficiently transfer the knowledge from these unlabeled utterances to improve model performance on Spoken Language Understanding (SLU) tasks. We use Embeddings from Language Model (ELMo) to take advantage of unlabeled data by learning contextualized word representations. Additionally, we propose ELMo-Light (ELMoL), a faster and simpler unsupervised pre-training method for SLU. Our findings suggest that unsupervised pre-training on a large corpus of unlabeled utterances leads to significantly better SLU performance than training from scratch, and can even outperform conventional supervised transfer. We also show that the gains from unsupervised transfer techniques can be further improved by supervised transfer. The improvements are more pronounced in low-resource settings: using only 1,000 labeled in-domain samples, our techniques match the performance of training from scratch on 10-15x more labeled in-domain data.
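The transfer pattern the abstract describes, pre-train on unlabeled utterances, then fine-tune on a small labeled set, can be illustrated with a deliberately tiny stand-in. The sketch below is not the paper's method: in place of an ELMo-style language model it factorizes a word co-occurrence matrix built from unlabeled utterances, and in place of a neural SLU model it uses a nearest-centroid intent classifier over averaged word vectors. All utterances, intents, and function names here are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

# Phase 1: unsupervised "pre-training" stand-in. Build word vectors from
# co-occurrence counts over unlabeled utterances. (The paper pre-trains an
# ELMo-style language model; this toy substitute only shows the pattern.)
unlabeled = [
    "play some jazz music",
    "play the next song",
    "what is the weather today",
    "will it rain today",
]
vocab = sorted({w for u in unlabeled for w in u.split()})
idx = {w: i for i, w in enumerate(vocab)}
cooc = np.zeros((len(vocab), len(vocab)))
for u in unlabeled:
    ws = u.split()
    for i in range(len(ws)):
        for j in range(max(0, i - 2), min(len(ws), i + 3)):  # window of 2
            if i != j:
                cooc[idx[ws[i]], idx[ws[j]]] += 1
# Low-rank factorization serves as the "pretrained" embedding table.
U, s, _ = np.linalg.svd(cooc)
emb = U[:, :4] * s[:4]

def encode(utt):
    """Average pretrained vectors over the in-vocabulary words."""
    vs = [emb[idx[w]] for w in utt.split() if w in idx]
    return np.mean(vs, axis=0) if vs else np.zeros(emb.shape[1])

# Phase 2: supervised fine-tuning stand-in. A nearest-centroid intent
# classifier trained on a handful of labeled in-domain samples.
labeled = [("play a song", "Music"), ("weather today", "Weather")]
groups = defaultdict(list)
for utt, intent in labeled:
    groups[intent].append(encode(utt))
centroids = {k: np.mean(v, axis=0) for k, v in groups.items()}

def predict(utt):
    v = encode(utt)
    return min(centroids, key=lambda k: np.linalg.norm(v - centroids[k]))

print(predict("play a song"))  # Music (zero distance to its own centroid)
print(predict("play jazz"))
```

The point of the sketch is the two-phase structure: the embedding table is learned without labels and then frozen and reused, so the labeled data only has to teach the classifier, which is why the paper sees the largest gains in low-resource settings.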


Related research

- 03/03/2020 · Transfer Learning for Context-Aware Spoken Language Understanding
  Spoken language understanding (SLU) is a key component of task-oriented ...
- 05/22/2020 · Automatic Discovery of Novel Intents & Domains from Text Utterances
  One of the primary tasks in Natural Language Understanding (NLU) is to r...
- 10/05/2020 · Self-training Improves Pre-training for Natural Language Understanding
  Unsupervised pre-training has led to much recent progress in natural lan...
- 02/13/2020 · Pre-Training for Query Rewriting in A Spoken Language Understanding System
  Query rewriting (QR) is an increasingly important technique to reduce cu...
- 10/09/2020 · Style Attuned Pre-training and Parameter Efficient Fine-tuning for Spoken Language Understanding
  Neural models have yielded state-of-the-art results in deciphering spoke...
- 06/28/2022 · Bottleneck Low-rank Transformers for Low-resource Spoken Language Understanding
  End-to-end spoken language understanding (SLU) systems benefit from pret...
- 04/22/2018 · Efficient Large-Scale Domain Classification with Personalized Attention
  In this paper, we explore the task of mapping spoken language utterances...
