Inter-connection: Effective Connection between Pre-trained Encoder and Decoder for Speech Translation

05/26/2023
by Yuta Nishikawa, et al.

In end-to-end speech translation, speech and text pre-trained models improve translation quality. Recently proposed models simply connect the pre-trained speech model as the encoder and the pre-trained text model as the decoder, so only the information from the encoder's final layer is passed to the decoder. Because each layer of a speech pre-trained model outputs different information, this simple connection cannot fully exploit what the speech pre-trained model has learned. In this study, we propose an inter-connection mechanism that aggregates the information from each layer of the speech pre-trained model with a weighted sum and feeds it into the decoder. With the speech pre-trained model frozen, this mechanism increased BLEU by approximately 2 points on en-de, en-ja, and en-zh while adding only about 2K parameters. Furthermore, by visualizing the learned layer weights, we investigated the contribution of each layer for each language and found that the contributions differed across languages.
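The core of the mechanism described above is a learned weighted sum over the hidden states of every encoder layer, which is fed to the decoder in place of the final-layer output alone. The sketch below illustrates this idea in PyTorch with a frozen wav2vec 2.0 encoder from Hugging Face; the class name InterConnection, the softmax-normalized scalar weights, and the model identifier "facebook/wav2vec2-large-960h" are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
from transformers import Wav2Vec2Model

class InterConnection(nn.Module):
    """Aggregate encoder layer outputs with a learned weighted sum (sketch)."""

    def __init__(self, num_layers: int):
        super().__init__()
        # One learnable scalar per layer plus a global scale.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, hidden_states):
        # hidden_states: list of (batch, time, dim) tensors, one per encoder layer.
        stacked = torch.stack(hidden_states, dim=0)           # (layers, batch, time, dim)
        weights = torch.softmax(self.layer_weights, dim=0)    # normalized layer weights
        mixed = (weights.view(-1, 1, 1, 1) * stacked).sum(0)  # weighted sum over layers
        return self.scale * mixed                             # (batch, time, dim), fed to the decoder

# Hypothetical usage with a frozen speech pre-trained encoder.
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-960h")
encoder.requires_grad_(False)  # freeze the speech pre-trained model
inter = InterConnection(num_layers=encoder.config.num_hidden_layers + 1)

waveform = torch.randn(1, 16000)  # one second of dummy 16 kHz audio
outputs = encoder(waveform, output_hidden_states=True)
decoder_input = inter(list(outputs.hidden_states))  # passed on to the text decoder

Because only the layer weights and the scale are trained on top of the frozen encoder, the parameter overhead of this aggregation is negligible compared with the encoder itself; the full mechanism in the paper reports roughly 2K additional parameters.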


