Semi-Supervised Spoken Language Understanding via Self-Supervised Speech and Language Model Pretraining

10/26/2020
by Cheng-I Lai, et al.

Much recent work on Spoken Language Understanding (SLU) is limited in at least one of three ways: models are trained on oracle text input and neglect ASR errors, models are trained to predict only intents without slot values, or models are trained on large amounts of in-house data. To address these issues, we propose a clean and general framework that learns semantics directly from speech, with semi-supervision from transcribed or untranscribed speech. Our framework is built upon pretrained end-to-end (E2E) ASR and self-supervised language models such as BERT, and is fine-tuned on a limited amount of target SLU data. We study two semi-supervised settings for the ASR component: supervised pretraining on transcribed speech, and unsupervised pretraining that replaces the ASR encoder with self-supervised speech representations such as wav2vec. In parallel, we identify two essential criteria for evaluating SLU models: robustness to environmental noise and E2E semantics evaluation. Experiments on ATIS show that our SLU framework with speech as input performs on par with frameworks using oracle text as input on semantic understanding, even when environmental noise is present and only a limited amount of labeled semantics data is available for training.
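To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a pretrained speech encoder feeds an ASR decoder, whose soft token hypotheses are consumed by a BERT-style language model topped with intent and slot heads. All module choices, dimensions, and the soft-embedding trick for end-to-end differentiability are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SLUSketch(nn.Module):
    """Illustrative sketch of the two-stage SLU framework (assumed layout)."""

    def __init__(self, feat_dim=512, vocab=1000, hidden=768,
                 num_intents=26, num_slots=120):
        super().__init__()
        # Stand-in for a pretrained speech encoder: supervised ASR features
        # or self-supervised wav2vec-style features in the real framework.
        self.speech_encoder = nn.Conv1d(1, feat_dim, kernel_size=400, stride=320)
        # Stand-in for the E2E ASR decoder producing token posteriors.
        self.asr_decoder = nn.Linear(feat_dim, vocab)
        # Soft embedding lookup so gradients flow through ASR hypotheses.
        self.token_embed = nn.Linear(vocab, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           batch_first=True)
        # Stand-in for a pretrained BERT-style contextual encoder.
        self.lm_encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.intent_head = nn.Linear(hidden, num_intents)  # utterance intent
        self.slot_head = nn.Linear(hidden, num_slots)      # per-token slots

    def forward(self, waveform):                          # (batch, samples)
        feats = self.speech_encoder(waveform.unsqueeze(1)).transpose(1, 2)
        token_probs = self.asr_decoder(feats).softmax(-1)  # soft ASR output
        hidden = self.lm_encoder(self.token_embed(token_probs))
        # Mean-pool for the utterance-level intent; per-token slot logits.
        return self.intent_head(hidden.mean(dim=1)), self.slot_head(hidden)

model = SLUSketch()
intent_logits, slot_logits = model(torch.randn(2, 16000))  # two 1 s clips
```

Swapping the placeholder encoder for frozen wav2vec features would correspond to the paper's unsupervised pretraining setting; fine-tuning the whole stack on a small set of (intent, slot) labels gives the semi-supervised E2E model.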


Related research

Towards Semi-Supervised Semantics Understanding from Speech (11/11/2020)
Much recent work on Spoken Language Understanding (SLU) falls short in a...

Speech-language Pre-training for End-to-end Spoken Language Understanding (02/11/2021)
End-to-end (E2E) spoken language understanding (SLU) can infer semantics...

Warped Language Models for Noise Robust Language Understanding (11/03/2020)
Masked Language Models (MLM) are self-supervised neural networks trained...

Self-Supervision on Images and Text Reduces Reliance on Visual Shortcut Features (06/14/2022)
Deep learning models trained in a fully supervised manner have been show...

Leveraging Pretrained ASR Encoders for Effective and Efficient End-to-End Speech Intent Classification and Slot Filling (07/13/2023)
We study speech intent classification and slot filling (SICSF) by propos...

Introducing Semantics into Speech Encoders (11/15/2022)
Recent studies find existing self-supervised speech encoders contain pri...

Deep LSTM Spoken Term Detection using Wav2Vec 2.0 Recognizer (10/21/2022)
In recent years, the standard hybrid DNN-HMM speech recognizers are outp...
