Multitask Learning with Low-Level Auxiliary Tasks for Encoder-Decoder Based Speech Recognition

04/05/2017
by Shubham Toshniwal et al.

End-to-end training of deep learning-based models allows for implicit learning of intermediate representations based on the final task loss. However, the end-to-end approach ignores the useful domain knowledge encoded in explicit intermediate-level supervision. We hypothesize that using intermediate representations as auxiliary supervision at lower levels of deep networks may be a good way of combining the advantages of end-to-end training and more traditional pipeline approaches. We present experiments on conversational speech recognition where we use lower-level tasks, such as phoneme recognition, in a multitask training approach with an encoder-decoder model for direct character transcription. We compare multiple types of lower-level tasks and analyze the effects of the auxiliary tasks. Our results on the Switchboard corpus show that this approach improves recognition accuracy over a standard encoder-decoder model on the Eval2000 test set.
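The core idea in the abstract is a multitask objective: the final character-transcription loss is combined with a lower-level auxiliary loss (e.g. phoneme recognition) attached to an intermediate encoder layer. As a minimal, hedged sketch of how such a combined objective can be computed — the function names, toy per-frame cross-entropy, and the simple weighted-sum combination are illustrative assumptions, not the paper's exact formulation (which places the auxiliary supervision at a specific lower layer of the encoder):

```python
import math

def cross_entropy(logits, target):
    # Softmax cross-entropy for a single frame (numerically stable log-sum-exp).
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

def multitask_loss(char_logits, char_targets,
                   phone_logits, phone_targets, aux_weight=0.5):
    """Weighted sum of the final character loss and a lower-level
    phoneme auxiliary loss, averaged over frames. `aux_weight` is a
    hypothetical interpolation hyperparameter."""
    char_loss = sum(cross_entropy(l, t)
                    for l, t in zip(char_logits, char_targets)) / len(char_targets)
    phone_loss = sum(cross_entropy(l, t)
                     for l, t in zip(phone_logits, phone_targets)) / len(phone_targets)
    return char_loss + aux_weight * phone_loss
```

With `aux_weight = 0` this reduces to the standard encoder-decoder character objective; increasing the weight trades off final-task fit against the auxiliary phoneme supervision.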


