A Neural Transducer

11/16/2015
by Navdeep Jaitly, et al.

Sequence-to-sequence models have achieved impressive results on various tasks. However, they are unsuitable for tasks that require incremental predictions to be made as more data arrives, or for tasks with long input and output sequences, because they generate an output sequence only after conditioning on the entire input sequence. In this paper, we present a Neural Transducer that can make incremental predictions as more input arrives, without redoing the entire computation. Unlike sequence-to-sequence models, the Neural Transducer computes the next-step distribution conditioned on the partially observed input sequence and the partially generated output sequence. At each time step, the transducer can decide to emit anywhere from zero to many output symbols. The data can first be processed by an encoder before being presented as input to the transducer. The discrete decision to emit a symbol at every time step makes it difficult to learn with conventional backpropagation; it is, however, possible to train the transducer by using a dynamic programming algorithm to generate target discrete decisions. Our experiments show that the Neural Transducer works well in settings where it is required to produce output predictions as data comes in. We also find that the Neural Transducer performs well for long sequences, even when attention mechanisms are not used.
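
To make the mechanism in the abstract concrete, below is a minimal, self-contained Python/NumPy sketch of the block-wise decoding loop it describes: the encoder consumes the input one block at a time, and after each block the transducer emits zero or more symbols conditioned only on the input seen so far and the symbols already produced. Everything here is an illustrative assumption, not the paper's implementation: `ToyTransducer` with random weights stands in for the paper's LSTM encoder and transducer networks, `END_OF_BLOCK` plays the role of the paper's end-of-block symbol, and greedy argmax replaces beam search.

```python
# Sketch of Neural Transducer decoding under the assumptions above.
# The real model uses LSTM encoder/transducer RNNs, optional attention,
# and beam search; random weights and greedy decoding suffice here to
# show the incremental control flow.
import numpy as np

END_OF_BLOCK = 0   # reserved symbol: "stop emitting, wait for more input"
VOCAB = 5          # hypothetical output vocabulary size (symbol 0 reserved)
HIDDEN = 8

rng = np.random.default_rng(0)

class ToyTransducer:
    """Random-weight stand-in for the encoder and transducer RNNs."""
    def __init__(self):
        self.W_enc = 0.1 * rng.normal(size=(HIDDEN + 1, HIDDEN))
        self.W_dec = 0.1 * rng.normal(size=(3 * HIDDEN, HIDDEN))
        self.W_out = 0.1 * rng.normal(size=(HIDDEN, VOCAB))
        self.embed = 0.1 * rng.normal(size=(VOCAB, HIDDEN))

    def encode_frame(self, h_enc, x):
        # Advance the encoder state by one input frame; nothing already
        # seen is ever recomputed.
        return np.tanh(np.concatenate([h_enc, [x]]) @ self.W_enc)

    def step(self, h_enc, h_dec, prev_sym):
        # Next-step distribution conditioned on the partially observed
        # input (h_enc) and the partially generated output (h_dec, prev_sym).
        h_dec = np.tanh(
            np.concatenate([h_enc, h_dec, self.embed[prev_sym]]) @ self.W_dec)
        logits = h_dec @ self.W_out
        probs = np.exp(logits - logits.max())
        return probs / probs.sum(), h_dec

def transduce(model, input_stream, block_size=3, max_emit=4):
    """Yield output symbols incrementally, one input block at a time."""
    h_enc = np.zeros(HIDDEN)
    h_dec = np.zeros(HIDDEN)
    prev_sym = END_OF_BLOCK
    block = []
    for x in input_stream:                  # frames arrive one at a time
        block.append(x)
        if len(block) < block_size:
            continue
        for frame in block:                 # encode only the new block
            h_enc = model.encode_frame(h_enc, frame)
        block = []
        for _ in range(max_emit):           # emit zero to many symbols
            probs, h_dec = model.step(h_enc, h_dec, prev_sym)
            prev_sym = int(probs.argmax())  # greedy; the paper beam-searches
            if prev_sym == END_OF_BLOCK:
                break                       # hand control back to the input
            yield prev_sym                  # prediction available immediately

if __name__ == "__main__":
    frames = rng.normal(size=12)            # e.g. twelve audio frames
    print(list(transduce(ToyTransducer(), frames)))
```

Note that the decision of when to emit `END_OF_BLOCK` is itself learned in the real model; because such emit decisions are discrete, training relies on the dynamic programming procedure the abstract mentions to generate target alignments between input blocks and output symbols.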

