Style Attuned Pre-training and Parameter Efficient Fine-tuning for Spoken Language Understanding

10/09/2020
by Jin Cao, et al.

Neural models have yielded state-of-the-art results in solving spoken language understanding (SLU) problems; however, these models require a significant amount of domain-specific labeled examples for training, which is prohibitively expensive. While pre-trained language models like BERT have been shown to capture a massive amount of knowledge by learning from unlabeled corpora and to solve SLU with fewer labeled examples for adaptation, the encoding of knowledge is implicit and agnostic to downstream tasks. Such encoding results in parameter inefficiency: an entirely new model is required for every domain. To address these challenges, we introduce a novel SLU framework comprising a conversational language modeling (CLM) pre-training task and a light encoder architecture. The CLM pre-training enables networks to capture the representation of language in conversational style in the presence of ASR errors. The light encoder architecture separates the shared pre-trained networks from the mappings of generally encoded knowledge to specific SLU domains, allowing domain adaptation to be performed solely at the light encoder and thus increasing efficiency. With this framework, we match the performance of state-of-the-art SLU results on Alexa internal datasets and on two public ones (ATIS, SNIPS), adding only 4.4% parameters per task.
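To make the parameter-efficiency idea concrete, here is a minimal sketch in PyTorch of the general setup the abstract describes: a shared pre-trained encoder that stays frozen, with a small domain-specific "light encoder" plus intent and slot heads that are the only parts trained per domain. The module names, layer sizes, the BiLSTM choice, and the label counts below are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch: frozen shared backbone + small per-domain light encoder.
# Sizes, the BiLSTM choice, and all names are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoModel


class LightEncoderSLU(nn.Module):
    def __init__(self, pretrained_name="bert-base-uncased",
                 num_intents=7, num_slot_labels=40, light_dim=256):
        super().__init__()
        # Shared pre-trained network: frozen and reused across all domains.
        self.backbone = AutoModel.from_pretrained(pretrained_name)
        for p in self.backbone.parameters():
            p.requires_grad = False

        hidden = self.backbone.config.hidden_size
        # Light, domain-specific encoder: the only module trained per domain.
        self.light_encoder = nn.LSTM(hidden, light_dim // 2,
                                     batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(light_dim, num_intents)
        self.slot_head = nn.Linear(light_dim, num_slot_labels)

    def forward(self, input_ids, attention_mask):
        # The backbone runs without gradients, so its weights never change.
        with torch.no_grad():
            states = self.backbone(
                input_ids=input_ids,
                attention_mask=attention_mask).last_hidden_state
        encoded, _ = self.light_encoder(states)
        intent_logits = self.intent_head(encoded[:, 0])  # first-token pooling
        slot_logits = self.slot_head(encoded)            # per-token slot labels
        return intent_logits, slot_logits


# Domain adaptation only updates the light encoder and the two heads.
model = LightEncoderSLU()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

Under this setup a single backbone is shared across domains, and each new domain adds only the light encoder and head parameters, which is what keeps the per-task parameter overhead small in the spirit of the 4.4% figure quoted above.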


Related research

The Effects of In-domain Corpus Size on pre-training BERT (12/15/2022)
Many prior language modeling efforts have shown that pre-training on an ...

TADA: Efficient Task-Agnostic Domain Adaptation for Transformers (05/22/2023)
Intermediate training of pre-trained transformer-based language models o...

Train No Evil: Selective Masking for Task-guided Pre-training (04/21/2020)
Recently, pre-trained language models mostly follow the pre-training-the...

ASR-Generated Text for Language Model Pre-training Applied to Speech Tasks (07/05/2022)
We aim at improving spoken language modeling (LM) using very large amoun...

Pre-Training for Query Rewriting in A Spoken Language Understanding System (02/13/2020)
Query rewriting (QR) is an increasingly important technique to reduce cu...

Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents (11/13/2018)
User interaction with voice-powered agents generates large amounts of un...

A Survey on Spoken Language Understanding: Recent Advances and New Frontiers (03/04/2021)
Spoken Language Understanding (SLU) aims to extract the semantics frame ...
