Learning Online Alignments with Continuous Rewards Policy Gradient

08/03/2016
by Yuping Luo, et al.

Sequence-to-sequence models with soft attention have had significant success in machine translation, speech recognition, and question answering. Though capable and easy to use, they require that the entire input sequence be available at the start of inference, an assumption that does not hold for instantaneous translation and speech recognition. To address this problem, we present a new method for solving sequence-to-sequence problems using hard online alignments instead of soft offline alignments. The online alignment model can begin producing outputs without first processing the entire input sequence. A highly accurate online sequence-to-sequence model is useful because it can serve as the basis for an accurate voice-based instantaneous translator. Our model uses hard binary stochastic decisions to select the timesteps at which outputs are produced, and it is trained to make these stochastic decisions with a standard policy gradient method. In our experiments, we show that this model achieves encouraging performance on the TIMIT and Wall Street Journal (WSJ) speech recognition datasets.
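To give a feel for the training signal described above, here is a minimal REINFORCE sketch, not the paper's actual model: a Bernoulli "emit" logit per timestep makes hard binary decisions, and a policy-gradient update with a moving-average baseline pushes the policy toward higher reward. The toy task, reward function, and all hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 8                 # number of input timesteps (toy setting)
target_emits = 3      # toy goal: learn to emit exactly 3 outputs
logits = np.zeros(T)  # one Bernoulli "emit" logit per timestep

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rollout(logits):
    """Sample hard binary emit decisions and score them."""
    p = sigmoid(logits)
    a = (rng.random(T) < p).astype(float)   # hard stochastic decisions
    reward = -abs(a.sum() - target_emits)   # toy reward: emit count error
    # REINFORCE: d/dlogit log Bernoulli(a; p) = a - p
    grad_logp = a - p
    return a, reward, grad_logp

lr = 0.5
baseline = 0.0
for step in range(2000):
    a, r, g = rollout(logits)
    baseline = 0.9 * baseline + 0.1 * r     # moving-average reward baseline
    logits += lr * (r - baseline) * g       # policy-gradient update

# After training, the policy's average emit count should sit near the target.
avg_emits = np.mean([rollout(logits)[0].sum() for _ in range(200)])
```

The decisions stay hard and non-differentiable throughout; only the log-probability of the sampled actions is differentiated, which is what lets a model start committing to outputs online rather than attending softly over the full input.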

Related research:

- 06/16/2017: An online sequence-to-sequence model for noisy speech recognition
- 10/10/2016: Latent Sequence Decompositions
- 04/03/2017: Online and Linear-Time Attention by Enforcing Monotonic Alignments
- 12/08/2016: Towards better decoding and language model integration in sequence to sequence models
- 04/25/2022: Supervised Attention in Sequence-to-Sequence Models for Speech Recognition
- 12/05/2017: Multi-Dialect Speech Recognition With A Single Sequence-To-Sequence Model
- 10/02/2018: Optimal Completion Distillation for Sequence Learning
