End-to-end Speech Translation via Cross-modal Progressive Training

04/21/2021 ∙ by Rong Ye, et al. ∙ 0

End-to-end speech translation models have become a new trend in research due to their potential to reduce error propagation. However, these models still suffer from the challenge of data scarcity. How to effectively make use of unlabeled or other parallel corpora from machine translation is promising but still an open problem. In this paper, we propose Cross Speech-Text Network (XSTNet), an end-to-end model for speech-to-text translation. XSTNet takes both speech and text as input and outputs both transcription and translation text. The model benefits from three key design aspects: a self-supervised pre-trained sub-network as the audio encoder, a multi-task training objective that exploits additional parallel bilingual text, and a progressive training procedure. We evaluate the performance of XSTNet and baselines on the MuST-C En-De/Fr/Ru datasets. XSTNet achieves state-of-the-art results on all three language directions with an average BLEU of 27.8, outperforming the previous best method by 3.7 BLEU. The code and the models will be released to the public.
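The abstract mentions a multi-task objective (speech translation, machine translation, and transcription) combined with progressive training. A minimal sketch of how such a staged, multi-task loss might be wired up is shown below; the function and stage names here are illustrative assumptions, not taken from the paper's released code.

```python
# Hypothetical sketch of an XSTNet-style multi-task, progressive objective.
# Assumed convention: stage 1 pre-trains on external MT text only;
# stage 2 jointly optimizes ST + MT + ASR losses on speech-translation data.

def multitask_loss(st_loss: float, mt_loss: float, asr_loss: float,
                   stage: int) -> float:
    """Combine per-task losses according to the training stage.

    st_loss  -- speech-to-translation loss
    mt_loss  -- text-to-text machine translation loss
    asr_loss -- speech-to-transcription loss
    stage    -- 1: MT-only pre-training; 2: joint multi-task training
    """
    if stage == 1:
        # Progressive training: warm up the text pathway first,
        # exploiting additional parallel bilingual text.
        return mt_loss
    # Joint stage: equal-weight sum of all three task losses
    # (actual task weighting is a modeling choice).
    return st_loss + mt_loss + asr_loss
```

In practice the per-task losses would be cross-entropy terms computed by a shared Transformer over speech and text inputs; the equal weighting above is a simplifying assumption.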


