The NPU-MSXF Speech-to-Speech Translation System for IWSLT 2023 Speech-to-Speech Translation Task

07/10/2023
by   Kun Song, et al.

This paper describes the NPU-MSXF system for the IWSLT 2023 speech-to-speech translation (S2ST) task, which translates multi-source English speech into Chinese speech. The system is built in a cascaded manner, consisting of automatic speech recognition (ASR), machine translation (MT), and text-to-speech (TTS) synthesis. Considerable effort is devoted to handling the challenging multi-source input. Specifically, to improve robustness to multi-source speech, we adopt various data augmentation strategies and a ROVER-based score fusion over the outputs of multiple ASR models. To better handle noisy ASR transcripts, we introduce a three-stage fine-tuning strategy that improves translation accuracy. Finally, we build a TTS model with high naturalness and sound quality, leveraging a two-stage framework that uses network bottleneck features as a robust intermediate representation to disentangle speaker timbre from linguistic content. On top of this two-stage framework, a pre-trained speaker embedding is used as a condition to transfer the speaker timbre of the source English speech to the translated Chinese speech. Experimental results show that our system achieves high translation accuracy, speech naturalness, sound quality, and speaker similarity, and is robust to multi-source data.
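The cascaded ASR → MT → TTS design can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: all function and class names are hypothetical, and the `rover_vote` helper is a drastic simplification of ROVER (real ROVER first aligns hypotheses into a word transition network before voting; here equal-length alignment is assumed).

```python
# Illustrative sketch of a cascaded S2ST pipeline with ASR output fusion.
# All names are hypothetical; the models are passed in as plain callables.
from collections import Counter
from typing import Callable, List


def rover_vote(hypotheses: List[List[str]]) -> List[str]:
    """Word-level majority voting over ASR hypotheses.

    Simplified stand-in for ROVER: assumes the hypotheses are already
    aligned word-for-word (real ROVER builds the alignment itself).
    """
    return [Counter(words).most_common(1)[0][0] for words in zip(*hypotheses)]


def cascaded_s2st(audio,
                  asr_models: List[Callable],
                  translate: Callable,
                  tts: Callable,
                  speaker_embed: Callable):
    # 1) ASR: run several models and fuse their outputs for robustness
    #    to multi-source input.
    hyps = [asr(audio).split() for asr in asr_models]
    transcript = " ".join(rover_vote(hyps))
    # 2) MT: translate the (possibly noisy) English transcript to Chinese.
    zh_text = translate(transcript)
    # 3) TTS: synthesize Chinese speech, conditioned on the source
    #    speaker's embedding to transfer timbre.
    return tts(zh_text, speaker_embed(audio))
```

With stub models, `cascaded_s2st` simply threads the audio through the three stages; in the actual system each callable would wrap a trained neural model.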

