Tied Multitask Learning for Neural Speech Translation

02/19/2018
by Antonios Anastasopoulos, et al.

We explore multitask models for the neural translation of speech, augmenting them to reflect two intuitive notions. First, we introduce a model in which the decoder of the second task receives information from the decoder of the first task, since higher-level intermediate representations should provide useful information. Second, we apply regularization that encourages transitivity and invertibility. We show that applying these notions to jointly trained models improves performance on the tasks of low-resource speech transcription and translation. It also leads to better performance when attention information is used for word discovery over unsegmented input.
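
To make the first notion concrete, here is a minimal PyTorch sketch of the tied two-decoder idea: the translation decoder attends over both the speech encoder states and the transcription decoder's hidden states. All names, sizes, and the dot-product attention are illustrative assumptions, not the paper's code; the transitivity and invertibility regularizers would enter as extra loss terms on top of the two cross-entropy losses, but their exact form is not given in the abstract, so they are omitted here.

```python
import torch
import torch.nn as nn

class TiedTwoTaskModel(nn.Module):
    """Illustrative 'tied' multitask model: the translation decoder (task 2)
    attends over both the speech encoder states and the hidden states of the
    transcription decoder (task 1)."""

    def __init__(self, n_chars, n_words, n_mel=40, d=256):
        super().__init__()
        self.d = d
        self.enc = nn.GRU(n_mel, d, batch_first=True, bidirectional=True)
        self.enc_proj = nn.Linear(2 * d, d)   # fold bidirectional states to width d
        self.emb1 = nn.Embedding(n_chars, d)  # task 1: source-language characters
        self.emb2 = nn.Embedding(n_words, d)  # task 2: target-language words
        self.dec1 = nn.GRUCell(2 * d, d)      # input: char embedding + encoder context
        self.dec2 = nn.GRUCell(3 * d, d)      # input: word emb + encoder ctx + dec1 ctx
        self.out1 = nn.Linear(d, n_chars)
        self.out2 = nn.Linear(d, n_words)

    @staticmethod
    def attend(q, keys):
        # Dot-product attention: q is (B, d), keys is (B, T, d).
        w = torch.softmax(torch.bmm(keys, q.unsqueeze(2)).squeeze(2), dim=1)
        return torch.bmm(w.unsqueeze(1), keys).squeeze(1)

    def forward(self, speech, chars, words):
        enc, _ = self.enc(speech)               # (B, T, 2d)
        enc = self.enc_proj(enc)                # (B, T, d)
        B = speech.size(0)

        # Task 1: teacher-forced transcription decoder; keep its hidden states.
        h1, states1 = speech.new_zeros(B, self.d), []
        for t in range(chars.size(1)):
            ctx = self.attend(h1, enc)
            h1 = self.dec1(torch.cat([self.emb1(chars[:, t]), ctx], dim=-1), h1)
            states1.append(h1)
        states1 = torch.stack(states1, dim=1)   # (B, T1, d)

        # Task 2: the translation decoder also attends over task 1's states,
        # so higher-level intermediate representations can flow to it.
        h2, logits2 = speech.new_zeros(B, self.d), []
        for t in range(words.size(1)):
            ctx_enc = self.attend(h2, enc)
            ctx_dec1 = self.attend(h2, states1)  # the "tied" connection
            h2 = self.dec2(torch.cat([self.emb2(words[:, t]), ctx_enc, ctx_dec1],
                                     dim=-1), h2)
            logits2.append(self.out2(h2))
        return self.out1(states1), torch.stack(logits2, dim=1)

# Smoke test with random inputs: batch of 2, 100 speech frames of 40 features,
# 30 teacher-forced characters, 20 teacher-forced words (all sizes arbitrary).
model = TiedTwoTaskModel(n_chars=60, n_words=8000)
logits1, logits2 = model(torch.randn(2, 100, 40),
                         torch.randint(0, 60, (2, 30)),
                         torch.randint(0, 8000, (2, 20)))
```

Joint training would simply sum the cross-entropy losses computed from logits1 and logits2, so gradients from the translation task also shape the transcription decoder's representations.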

Related research

05/21/2020 · Worse WER, but Better BLEU? Leveraging Word Embedding as Intermediate in Multitask End-to-End Speech Translation
Speech translation (ST) aims to learn transformations from speech in the...

07/17/2018 · Hierarchical Multitask Learning for CTC-based Speech Recognition
Previous work has shown that neural encoder-decoder speech recognition c...

07/12/2021 · Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task
Pretraining and multitask learning are widely used to improve the speech...

11/24/2022 · Multitask Learning for Low Resource Spoken Language Understanding
We explore the benefits that multitask learning offer to speech processi...

03/24/2018 · Low-Resource Speech-to-Text Translation
Speech-to-text translation has many potential applications for low-resou...

05/01/2021 · AlloST: Low-resource Speech Translation without Source Transcription
The end-to-end architecture has made promising progress in speech transl...
