Exploring Transfer Learning For End-to-End Spoken Language Understanding

12/15/2020
by   Subendhu Rongali, et al.

Voice assistants such as Alexa, Siri, and Google Assistant typically use a two-stage Spoken Language Understanding pipeline: first, an Automatic Speech Recognition (ASR) component processes customer speech and generates text transcriptions; then a Natural Language Understanding (NLU) component maps transcriptions to an actionable hypothesis. An end-to-end (E2E) system that goes directly from speech to a hypothesis is a more attractive option. Such systems have been shown to be smaller, faster, and better optimized. However, they require massive amounts of end-to-end training data and, in addition, do not take advantage of already-available ASR and NLU training data. In this work, we propose an E2E system that is designed to jointly train on multiple speech-to-text tasks, such as ASR (speech-transcription) and SLU (speech-hypothesis), and text-to-text tasks, such as NLU (text-hypothesis). We call this the Audio-Text All-Task (AT-AT) Model and we show that it beats the performance of E2E models trained on individual tasks, especially ones trained on limited data. We show this result on an internal music dataset and two public datasets, FluentSpeech and SNIPS Audio, where we achieve state-of-the-art results. Since our model can process both speech and text input sequences and learn to predict a target sequence, it also allows us to do zero-shot E2E SLU by training on only text-hypothesis data (without any speech) from a new domain. We evaluate this ability of our model on the Facebook TOP dataset and set a new benchmark for zero-shot E2E performance. We will soon release the audio data collected for the TOP dataset for future research.
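The key idea in the abstract — casting ASR (speech-transcription), SLU (speech-hypothesis), and NLU (text-hypothesis) as sequence-to-sequence tasks that share one target decoder and are trained jointly — can be illustrated with a minimal data-mixing sketch. The example names (`make_example`, `mixed_batches`, the toy `PlayMusic` hypothesis format) are hypothetical, not from the paper; this only shows how heterogeneous task data might be pooled into joint training batches, not the AT-AT architecture itself.

```python
import random

# Hypothetical example format for multi-task seq2seq training:
# every task maps a source sequence (audio features or text tokens)
# to a target token sequence, so one shared decoder can serve all tasks.
def make_example(task, source, target):
    """Tag each example with its task so a shared model can learn all of them."""
    return {"task": task, "source": source, "target": target}

def mixed_batches(datasets, batch_size, seed=0):
    """Interleave examples from several speech-to-text and text-to-text
    datasets into joint training batches (uniform-sampling sketch)."""
    rng = random.Random(seed)
    pool = [ex for data in datasets.values() for ex in data]
    rng.shuffle(pool)  # mix tasks within and across batches
    for i in range(0, len(pool), batch_size):
        yield pool[i:i + batch_size]

# Toy data standing in for ASR (speech -> transcription),
# SLU (speech -> hypothesis), and NLU (text -> hypothesis) pairs.
asr = [make_example("ASR", ["<audio_1>"], ["play", "jazz"])]
slu = [make_example("SLU", ["<audio_1>"], ["PlayMusic(genre=jazz)"])]
nlu = [make_example("NLU", ["play", "jazz"], ["PlayMusic(genre=jazz)"])]

batches = list(mixed_batches({"ASR": asr, "SLU": slu, "NLU": nlu}, batch_size=2))
```

In a real system each batch would feed a shared encoder-decoder, with audio sources passing through a speech encoder and text sources through a token embedder; the zero-shot result in the abstract follows because a new domain's text-hypothesis (NLU) data alone still trains the shared decoder's target vocabulary.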


04/04/2022

Deliberation Model for On-Device Spoken Language Understanding

We propose a novel deliberation-based approach to end-to-end (E2E) spoke...
03/23/2021

Hallucination of speech recognition errors with sequence to sequence learning

Automatic Speech Recognition (ASR) is an imperfect process that results ...
04/08/2021

RNN Transducer Models For Spoken Language Understanding

We present a comprehensive study on building and adapting RNN transducer...
11/11/2020

Towards Semi-Supervised Semantics Understanding from Speech

Much recent work on Spoken Language Understanding (SLU) falls short in a...
06/18/2019

Curriculum-based transfer learning for an effective end-to-end spoken language understanding and domain portability

We present an end-to-end approach to extract semantic concepts directly ...
11/10/2020

A low latency ASR-free end to end spoken language understanding system

In recent years, developing a speech understanding system that classifie...
12/06/2019

Audio-attention discriminative language model for ASR rescoring

End-to-end approaches for automatic speech recognition (ASR) benefit fro...