Searchable Hidden Intermediates for End-to-End Models of Decomposable Sequence Tasks

05/02/2021
by   Siddharth Dalmia, et al.

End-to-end approaches for sequence tasks are becoming increasingly popular. Yet for complex sequence tasks, like speech translation, systems that cascade several models trained on sub-tasks have been shown to be superior, suggesting that the compositionality of cascaded systems simplifies learning and enables sophisticated search capabilities. In this work, we present an end-to-end framework that exploits compositionality to learn searchable hidden representations at intermediate stages of a sequence model using decomposed sub-tasks. These hidden intermediates can be improved with beam search to enhance overall performance, and external models can be incorporated at intermediate stages of the network to re-score hypotheses or adapt towards out-of-domain data. One instance of the proposed framework is a Multi-Decoder model for speech translation that extracts the searchable hidden intermediates from a speech recognition sub-task. The model demonstrates the aforementioned benefits and outperforms the previous state-of-the-art by around +6 and +3 BLEU on the two test sets of Fisher-CallHome and by around +3 and +4 BLEU on the English-German and English-French test sets of MuST-C.
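The Multi-Decoder idea can be sketched compactly: a speech encoder feeds an ASR decoder whose hidden states serve as the searchable intermediates, and a second decoder attends over those states to produce the translation. The PyTorch sketch below is illustrative only; the module names, dimensions, and single-GRU layers are assumptions rather than the authors' implementation. It is meant to show where beam search over the ASR sub-task and external re-scoring would plug in.

```python
import torch
import torch.nn as nn

class MultiDecoderSketch(nn.Module):
    """Speech encoder -> ASR decoder (hidden intermediates) -> ST decoder (sketch)."""

    def __init__(self, n_feats=80, vocab_asr=500, vocab_st=500, d=256, heads=4):
        super().__init__()
        self.encoder = nn.GRU(n_feats, d, batch_first=True)             # speech encoder
        self.asr_emb = nn.Embedding(vocab_asr, d)
        self.asr_dec = nn.GRU(d, d, batch_first=True)                   # ASR sub-task decoder
        self.asr_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.asr_out = nn.Linear(d, vocab_asr)
        self.st_emb = nn.Embedding(vocab_st, d)
        self.st_dec = nn.GRU(d, d, batch_first=True)                    # translation decoder
        self.st_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.st_out = nn.Linear(d, vocab_st)

    def forward(self, speech, asr_tokens, st_tokens):
        enc, _ = self.encoder(speech)
        # ASR decoder states attend over the speech encoding; these states act as
        # the "searchable hidden intermediates". At inference, asr_tokens would come
        # from beam search over the ASR sub-task (optionally re-scored by an external
        # language model) rather than from a reference transcript.
        h_asr, _ = self.asr_dec(self.asr_emb(asr_tokens))
        a_ctx, _ = self.asr_attn(h_asr, enc, enc)
        h_asr = h_asr + a_ctx
        asr_logits = self.asr_out(h_asr)
        # The translation decoder attends over the hidden intermediates rather than
        # the discrete transcript, so the model remains end-to-end differentiable
        # while still exposing a searchable intermediate stage.
        h_st, _ = self.st_dec(self.st_emb(st_tokens))
        t_ctx, _ = self.st_attn(h_st, h_asr, h_asr)
        st_logits = self.st_out(h_st + t_ctx)
        return asr_logits, st_logits

# Example teacher-forced forward pass with toy shapes:
model = MultiDecoderSketch()
speech = torch.randn(2, 120, 80)              # (batch, frames, filterbank features)
asr_tokens = torch.randint(0, 500, (2, 20))   # transcript tokens
st_tokens = torch.randint(0, 500, (2, 25))    # translation tokens
asr_logits, st_logits = model(speech, asr_tokens, st_tokens)
```

The key design choice this sketch mirrors is that the second decoder consumes continuous hidden states of the first decoder, not its discrete output, which is what lets the cascade-style decomposition stay trainable end to end.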


