Sequence-to-Sequence Models Can Directly Translate Foreign Speech

03/24/2017
by Ron J. Weiss et al.

We present a recurrent encoder-decoder deep neural network architecture that directly translates speech in one language into text in another. The model does not explicitly transcribe the speech into text in the source language, nor does it require supervision from the ground truth source language transcription during training. We apply a slightly modified sequence-to-sequence with attention architecture that has previously been used for speech recognition and show that it can be repurposed for this more complex task, illustrating the power of attention-based models. A single model trained end-to-end obtains state-of-the-art performance on the Fisher Callhome Spanish-English speech translation task, outperforming a cascade of independently trained sequence-to-sequence speech recognition and machine translation models by 1.8 BLEU points on the Fisher test set. In addition, we find that making use of the training data in both languages by multi-task training sequence-to-sequence speech translation and recognition models with a shared encoder network can improve performance by a further 1.4 BLEU points.
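The multi-task setup described above amounts to one recurrent encoder over the speech features, shared by two attention decoders: one emitting source-language transcripts (recognition) and one emitting target-language text (translation). The sketch below illustrates that idea only; it is not the authors' implementation, and the framework (PyTorch), layer sizes, vocabulary sizes, and helper names (SharedEncoder, AttentionDecoder, shift_right) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedEncoder(nn.Module):
    """Bidirectional LSTM over log-mel speech frames, shared by both tasks."""

    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_mels, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)

    def forward(self, frames):                 # frames: (batch, time, n_mels)
        states, _ = self.rnn(frames)
        return states                          # (batch, time, 2 * hidden)


class AttentionDecoder(nn.Module):
    """LSTM decoder with content-based attention over the encoder states."""

    def __init__(self, vocab, enc_dim=512, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.cell = nn.LSTMCell(hidden + enc_dim, hidden)
        self.score = nn.Linear(enc_dim + hidden, 1)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, enc_states, dec_inputs):  # dec_inputs: (batch, out_len)
        batch, time, _ = enc_states.shape
        h = enc_states.new_zeros(batch, self.cell.hidden_size)
        c = torch.zeros_like(h)
        logits = []
        for t in range(dec_inputs.size(1)):
            # attention weights over every encoder time step
            scores = self.score(torch.cat(
                [enc_states, h.unsqueeze(1).expand(-1, time, -1)], dim=-1))
            weights = torch.softmax(scores, dim=1)           # (batch, time, 1)
            context = (weights * enc_states).sum(dim=1)      # (batch, enc_dim)
            h, c = self.cell(
                torch.cat([self.embed(dec_inputs[:, t]), context], dim=-1),
                (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                    # (batch, out_len, vocab)


def shift_right(tokens, bos=0):
    """Teacher forcing: the decoder reads the target shifted right, starting
    from a beginning-of-sequence id (0 here, an arbitrary choice)."""
    return torch.cat([torch.full_like(tokens[:, :1], bos), tokens[:, :-1]], dim=1)


# One multi-task training step: both decoders attend to the same encoder
# states, and the summed cross-entropy losses push gradients from both the
# recognition and translation tasks into the shared encoder.
encoder = SharedEncoder()
asr_decoder = AttentionDecoder(vocab=100)      # source-language characters
st_decoder = AttentionDecoder(vocab=100)       # target-language characters

frames = torch.randn(4, 120, 80)               # dummy batch of speech features
transcripts = torch.randint(0, 100, (4, 20))   # dummy source-language token ids
translations = torch.randint(0, 100, (4, 25))  # dummy target-language token ids

enc_states = encoder(frames)
asr_logits = asr_decoder(enc_states, shift_right(transcripts))
st_logits = st_decoder(enc_states, shift_right(translations))
loss = (F.cross_entropy(asr_logits.transpose(1, 2), transcripts)
        + F.cross_entropy(st_logits.transpose(1, 2), translations))
loss.backward()
```

Summing the two losses is the simplest way to let both tasks update the shared encoder; the attention variant, depth, and training schedule used in the paper may differ from this sketch.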

