1 Introduction
The recently introduced sequence-to-sequence model has shown success in many tasks that map sequences to sequences, e.g., translation, speech recognition, image captioning and dialogue modeling (sutskevernips2014; choemnlp2014; bahdanauiclr2015; chorowskinips2015; chan2015listen; vinyalsarvix2014; vinyals2015grammar; sordoni2015neural; vinyals2015neural). However, this method is unsuitable for tasks where it is important to produce outputs as the input sequence arrives. Speech recognition is an example of such an online task – users prefer seeing an ongoing transcription of speech over receiving it at the “end” of an utterance. Similarly, instant translation systems would be much more effective if audio was translated online, rather than after entire utterances. This limitation of the sequence-to-sequence model is due to the fact that output predictions are conditioned on the entire input sequence.
In this paper, we present the Neural Transducer, a more general class of sequence-to-sequence learning models. The Neural Transducer can produce chunks of outputs (possibly of zero length) as blocks of inputs arrive, thus satisfying the condition of being “online” (see Figure 1(b) for an overview). The model generates outputs for each block by using a transducer RNN that implements a sequence-to-sequence model. The inputs to the transducer RNN come from two sources: the encoder RNN and its own recurrent state. In other words, the transducer RNN generates local extensions to the output sequence, conditioned on the features computed for the block by an encoder RNN and on the recurrent state of the transducer RNN at the last step of the previous block.
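The overall control flow can be sketched in a few lines. The function names (`encode_block`, `transducer_step`), string tokens, and toy state handling below are illustrative assumptions, not the paper's implementation:

```python
def run_neural_transducer(x, W, M, encode_block, transducer_step, s0):
    """Emit output chunks online, one input block at a time.

    encode_block(block, enc_state) -> (features, enc_state)
    transducer_step(features, s, prev_tok) -> (token, s)
    The transducer state s is carried across blocks, so each chunk is
    conditioned on all previous inputs and previously emitted outputs.
    """
    outputs, enc_state, s, prev = [], None, s0, "<s>"
    for b in range(0, len(x), W):
        feats, enc_state = encode_block(x[b:b + W], enc_state)
        for _ in range(M):                  # at most M tokens per block
            tok, s = transducer_step(feats, s, prev)
            if tok == "<e>":                # done: consume the next block
                break
            outputs.append(tok)
            prev = tok
    return outputs
```

Because the encoder and transducer states are threaded through the loop, each emitted chunk depends on everything seen so far, unlike a per-block independent decoder.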
During training, alignments of output symbols to the input sequence are unavailable. One way of overcoming this limitation is to treat the alignment as a latent variable and to marginalize over all possible values of this alignment variable. Another approach is to generate alignments from a different algorithm, and to train our model to maximize the probability of these alignments. Connectionist Temporal Classification (CTC) (graves2013speech) follows the former strategy, using a dynamic programming algorithm that allows for easy marginalization over the unary potentials produced by a recurrent neural network (RNN). However, this is not possible in our model, since the neural network makes next-step predictions that are conditioned not just on the input data, but on the alignment and the targets produced until the current step. In this paper, we show how a dynamic programming algorithm can be used to compute “approximate” best alignments from this model. We show that training our model on these alignments leads to strong results.
On the TIMIT phoneme recognition task, a Neural Transducer (with a 3-layer unidirectional LSTM encoder and a 3-layer unidirectional LSTM transducer) achieves a phoneme error rate (PER) of 20.8%, which is close to state-of-the-art for unidirectional models. We also show that if good alignments are made available (e.g., from a GMM-HMM system), the model can achieve 19.8% PER.
2 Related Work
In the past few years, many proposals have been made to add more power or flexibility to neural networks, especially via the concept of augmented memory (graves2014neural; sukhbaatar2015end; zaremba2015reinforcement) or augmented arithmetic units (neelakantan2015neural; reed2015neural). Our work is not concerned with memory or arithmetic components, but it allows more flexibility in the model so that it can dynamically produce outputs as data arrive.
Our work is related to traditional structured prediction methods, commonplace in speech recognition. The work bears similarity to HMM-DNN (hinton2012deep) and CTC (graves2013speech) systems. An important aspect of these approaches is that the model makes predictions at every input time step. A weakness of these models is that they typically assume conditional independence between the predictions at each output step.
Sequence-to-sequence models represent a breakthrough where no such assumptions are made – the output sequence is generated by next-step prediction, conditioning on the entire input sequence and the partial output sequence generated so far (chorowskinips2014; chorowskinips2015; chan2015listen). Figure 1(a) shows the high-level picture of this architecture. However, as can be seen from the figure, these models have a limitation in that they have to wait until the end of the speech utterance to start decoding. This property makes them unattractive for real-time speech recognition and online translation. Bahdanau et al. (BahdanauCSBB15) attempt to rectify this for speech recognition by using a moving windowed attention, but they do not provide a mechanism to address the situation that arises when no output can be produced from the windowed segment of data.
Figure 1(b) shows the difference between our method and sequencetosequence models.
A strongly related model is the sequence transducer (gravesicml2012; gravesicassp2013). This model augments the CTC model by combining the transcription model with a prediction model. The prediction model is akin to a language model, operating only on the output tokens as a next-step prediction model. This gives the model more expressiveness compared to CTC, which makes independent predictions at every time step. However, unlike the model presented in this paper, the two models in the sequence transducer operate independently – the model does not provide a mechanism by which the prediction network features at one time step would change the transcription network features in the future, or vice versa. Our model, in effect, generalizes both this model and the sequence-to-sequence model.
Our formulation requires inferring alignments during training. However, our results indicate that this can be done relatively fast, and with little loss of accuracy, even on a small dataset where no effort was made at regularization. Further, if alignments are given, as is easily done offline for various tasks, the model is able to train relatively fast, without this inference step.
3 Methods
In this section we describe the model in more detail. Please refer to Figure 2 for an overview.
3.1 Model
Let $x_{1 \ldots L}$ be the input data, $L$ time steps long, where $x_i$ represents the features at input time step $i$. Let $W$ be the block size, i.e., the periodicity with which the transducer emits output tokens, and $N = \lceil L/W \rceil$ be the number of blocks.

Let $y_{1 \ldots S}$ be the target sequence, corresponding to the input sequence. Further, let the transducer produce a sequence of $k$ outputs, $\tilde{y}_{i \ldots (i+k)}$, where $0 \leq k < M$, for any input block. Each such sequence is padded with the <e> symbol, which is added to the vocabulary. It signifies that the transducer may proceed and consume data from the next block. When no symbols are produced for a block, this symbol is akin to the blank symbol of CTC.

The sequence $\tilde{y}_{1 \ldots (S+N)}$ can be transduced from the input via various alignments. Let $\mathcal{Y}$ be the set of all alignments of the output sequence $y_{1 \ldots S}$ to the input blocks. Let $\tilde{y} \in \mathcal{Y}$ be any such alignment. Note that the length of $\tilde{y}$ is more than the length of $y$, since there are $N$ end-of-block symbols, <e>, in $\tilde{y}$. However, the number of sequences $\tilde{y}$ matching to $y$ is much larger, corresponding to all possible alignments of $y$ to the blocks. The block that element $\tilde{y}_i$ is aligned to can be inferred simply by counting the number of <e> symbols that came before index $i$. Let $e_b$ be the index of the last token in $\tilde{y}$ emitted in the $b$-th block. Note that $e_0 = 0$ and $e_N = S + N$. Thus $\tilde{y}^{(b)} = \tilde{y}_{(e_{b-1}+1) \ldots e_b}$ for each block $b$.

In this section, we show how to compute $p(\tilde{y}_{1 \ldots (S+N)} \mid x_{1 \ldots L})$. Later, in Section 3.5, we show how to compute, and maximize, $p(y_{1 \ldots S} \mid x_{1 \ldots L})$.
We first compute the probability of seeing output sequence $\tilde{y}_{1 \ldots e_b}$ by the end of block $b$ as follows:

$$p(\tilde{y}_{1 \ldots e_b} \mid x_{1 \ldots bW}) = \prod_{b'=1}^{b} p(\tilde{y}_{(e_{b'-1}+1) \ldots e_{b'}} \mid x_{1 \ldots b'W}, \tilde{y}_{1 \ldots e_{b'-1}}) \quad (1)$$

Each of the terms in this equation is itself computed by the chain rule decomposition, i.e., for any block $b$,

$$p(\tilde{y}_{(e_{b-1}+1) \ldots e_b} \mid x_{1 \ldots bW}, \tilde{y}_{1 \ldots e_{b-1}}) = \prod_{m=e_{b-1}+1}^{e_b} p(\tilde{y}_m \mid x_{1 \ldots bW}, \tilde{y}_{1 \ldots (m-1)}) \quad (2)$$

The next-step probability terms, $p(\tilde{y}_m \mid x_{1 \ldots bW}, \tilde{y}_{1 \ldots (m-1)})$, in Equation 2 are computed by the transducer using the encoding of the input computed by the encoder, and the label prefix that was input into the transducer at previous emission steps. We describe this in more detail in the next subsection.
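To make the alignment bookkeeping concrete, the block a transduced token belongs to, and the block-end indices $e_b$, can be recovered from the positions of the <e> symbols. These helpers are a hypothetical sketch, not part of the model:

```python
def block_of(y_tilde, i):
    """Block index (1-based) that y_tilde[i] is aligned to: one plus the
    number of end-of-block symbols <e> occurring before position i."""
    return 1 + sum(1 for t in y_tilde[:i] if t == "<e>")

def block_ends(y_tilde):
    """Indices e_b (1-based) of the last token emitted in each block;
    each block's last token is its <e> symbol."""
    return [i + 1 for i, t in enumerate(y_tilde) if t == "<e>"]
```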
3.2 Next Step Prediction
We again refer the reader to Figure 2 for this discussion. The example shows a transducer with two hidden layers, with units $s_m$ and $h'_m$ at output step $m$. In the figure, the next-step prediction is shown for block $b$. For this block, the index of the first output symbol is $m$, and the index of the last output symbol is $m+2$ (i.e., $e_b = m+2$).
The transducer computes the next-step prediction, using the parameters $\theta$ of the neural network, through the following sequence of steps:

$$s_m = f_{RNN}(s_{m-1}, [c_{m-1}; \tilde{y}_{m-1}]; \theta) \quad (3)$$
$$c_m = f_{context}(s_m, h_{((b-1)W+1) \ldots bW}; \theta) \quad (4)$$
$$h'_m = f_{RNN}(h'_{m-1}, [c_m; s_m]; \theta) \quad (5)$$
$$p(\tilde{y}_m \mid x_{1 \ldots bW}, \tilde{y}_{1 \ldots (m-1)}) = f_{softmax}(\tilde{y}_m; h'_m; \theta) \quad (6)$$

where $f_{RNN}$ is the recurrent neural network function (such as an LSTM or a sigmoid or tanh RNN) that computes the state vector for a layer at a step using the recurrent state vector at the last step and the input at the current step (note that for an LSTM we would additionally have to factor in cell states from the previous steps – we have ignored this in the notation for clarity; the exact details are easily worked out); $f_{softmax}(\cdot; h'_m; \theta)$ is the softmax distribution computed by a softmax layer with input vector $h'_m$; and $f_{context}(s_m, h_{((b-1)W+1) \ldots bW}; \theta)$ is the context function, which computes the input to the transducer at output step $m$ from the state $s_m$ at the current step and the features $h_{((b-1)W+1) \ldots bW}$ of the encoder for the current input block $b$. We experimented with different ways of computing the context vector – with and without an attention mechanism. These are described subsequently in Section 3.3.

Note that since the encoder is an RNN, $h_{((b-1)W+1) \ldots bW}$ is actually a function of the entire input, $x_{1 \ldots bW}$, so far. Correspondingly, $s_m$ is a function of the labels emitted so far and the entire input seen so far (for the first output step of a block it includes only the input seen until the end of the last block). Similarly, $h'_m$ is a function of the labels emitted so far and the entire input seen so far.
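A minimal sketch of one next-step prediction in the spirit of Equations 3–6, using plain tanh RNN cells in place of LSTMs and dot-product attention as the context function; the weight names, shapes, and cell choice are assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def transducer_step(params, s_prev, h_prime_prev, c_prev, y_prev_emb, h_block):
    """One next-step prediction mirroring Eqs. 3-6 with simple tanh cells.

    s_m  = f_RNN(s_{m-1}, [c_{m-1}; y~_{m-1}])     (3)
    c_m  = f_context(s_m, h_block)                 (4), here dot-attention
    h'_m = f_RNN(h'_{m-1}, [c_m; s_m])             (5)
    p    = f_softmax(h'_m)                         (6)
    """
    Ws, Us, Wh, Uh, Wo = params
    inp1 = np.concatenate([c_prev, y_prev_emb])
    s_m = np.tanh(Ws @ inp1 + Us @ s_prev)          # Eq. 3
    alpha = softmax(h_block @ s_m)                  # attention weights
    c_m = alpha @ h_block                           # Eq. 4
    inp2 = np.concatenate([c_m, s_m])
    h_m = np.tanh(Wh @ inp2 + Uh @ h_prime_prev)    # Eq. 5
    p = softmax(Wo @ h_m)                           # Eq. 6
    return p, s_m, h_m, c_m
```

The returned states $(s_m, h'_m, c_m)$ are fed back in at the next output step, which is what makes the prediction depend on the full label prefix.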
3.3 Computing $f_{context}$
We first describe how the context vector is computed by an attention model similar to earlier work (chorowskinips2014; bahdanauiclr2015; chan2015listen). We call this model the MLP-attention model.

In this model the context vector $c_m$ is computed in two steps – first a normalized attention vector $\alpha^m$ is computed from the state $s_m$ of the transducer, and next the hidden states of the encoder for the current block are linearly combined using $\alpha^m$ and used as the context vector. To compute $\alpha^m$, a multi-layer perceptron computes a scalar value, $e^m_j$, for each pair of transducer state $s_m$ and encoder state $h_{(b-1)W+j}$. The attention vector is computed from the scalar values, $e^m_j$, $j = 1 \ldots W$. Formally:

$$e^m_j = f_{attention}(s_m, h_{(b-1)W+j}; \theta) \quad (7)$$
$$\alpha^m = \mathrm{softmax}([e^m_1; e^m_2; \ldots; e^m_W]) \quad (8)$$
$$c_m = \sum_{j=1}^{W} \alpha^m_j \, h_{(b-1)W+j} \quad (9)$$
We also experimented with using a simpler model for $f_{attention}$ that computed $e^m_j = s_m^\top h_{(b-1)W+j}$. We refer to this model as the DOT-attention model.
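The two context functions can be sketched as follows. The single-hidden-layer MLP parameterization ($V_s$, $V_h$, $w$) is an assumed Bahdanau-style form, since the exact MLP architecture is not spelled out here:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def mlp_attention_context(s_m, h_block, V_s, V_h, w):
    """Context vector via Eqs. 7-9 with an assumed one-hidden-layer MLP:
    e_j = w . tanh(V_s s_m + V_h h_j);  alpha = softmax(e);  c = alpha . H."""
    energies = np.array([w @ np.tanh(V_s @ s_m + V_h @ h_j)
                         for h_j in h_block])
    alpha = softmax(energies)
    return alpha @ h_block, alpha

def dot_attention_context(s_m, h_block):
    """DOT-attention variant: e_j is the dot product of s_m and h_j."""
    alpha = softmax(h_block @ s_m)
    return alpha @ h_block, alpha
```

The LSTM-attention variant described below would replace the softmax over the energies with a small recurrent network that outputs the attention vector directly.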
Both of these attention models have two shortcomings. Firstly, there is no explicit mechanism that requires the attention model to move its focus forward from one output time step to the next. Secondly, the energies computed as inputs to the softmax function for different input frames are independent of each other at each time step, and thus cannot modulate (e.g., enhance or suppress) each other, other than through the softmax function. Chorowski et al. (chorowskinips2015) ameliorate the second problem by using a convolutional operator that affects the attention at one time step using the attention at the last time step.
We attempt to address these two shortcomings using a new attention mechanism. In this model, instead of feeding the scalar energies $e^m_j$ into a softmax, we feed them into a recurrent neural network with one hidden layer that outputs the softmax attention vector at each time step. Thus the model should be able to modulate the attention vector both within a time step and across time steps. This attention model is thus more general than the convolutional operator of Chorowski et al. (2015), but it can only be applied to the case where the context window size is constant. We refer to this model as LSTM-attention.
3.4 Addressing End of Blocks
Since the model only produces a small sequence of output tokens in each block, we have to address the mechanism for shifting the transducer from one block to the next. We experimented with three distinct ways of doing this. In the first approach, we introduced no explicit mechanism for end-of-blocks, hoping that the transducer neural network would implicitly learn a model from the training data. In the second approach we added end-of-block symbols, <e>, to the label sequence to demarcate the end of blocks, and we added this symbol to the target dictionary. Thus the softmax function in Equation 6 implicitly learns either to emit a token, or to move the transducer forward to the next block. In the third approach, we model moving the transducer forward using a separate logistic function of the attention vector. The target of the logistic function is 0 or 1, depending on whether the current step is the last step in the block or not.
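Under the second approach, given an alignment of output chunks to blocks, the training targets are simply the chunks with <e> appended; a minimal sketch:

```python
def pad_targets_with_e(aligned_chunks, M):
    """Build the transduced target sequence y~ from per-block output
    chunks by appending <e> after each block (the second end-of-block
    strategy). Each block emits at most M tokens including its <e>."""
    y_tilde = []
    for chunk in aligned_chunks:
        assert len(chunk) < M   # leave room for the <e> symbol
        y_tilde.extend(chunk)
        y_tilde.append("<e>")
    return y_tilde
```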
3.5 Training
In this section we show how the Neural Transducer model can be trained.
The probability of the output sequence $y_{1 \ldots S}$, given $x_{1 \ldots L}$, is as follows (note that this equation implicitly incorporates the prior for alignments):

$$p(y_{1 \ldots S} \mid x_{1 \ldots L}) = \sum_{\tilde{y} \in \mathcal{Y}} p(\tilde{y} \mid x_{1 \ldots L}) \quad (10)$$

In theory, we can train the model by maximizing the log of Equation 10. The gradient of the log likelihood can easily be expressed as follows:

$$\frac{\partial}{\partial \theta} \log p(y_{1 \ldots S} \mid x_{1 \ldots L}) = \sum_{\tilde{y} \in \mathcal{Y}} p(\tilde{y} \mid x_{1 \ldots L}, y_{1 \ldots S}) \, \frac{\partial}{\partial \theta} \log p(\tilde{y} \mid x_{1 \ldots L}) \quad (11)$$
Each of the latter terms in the sum on the right hand side can be computed by backpropagation, using $\tilde{y}$ as the target of the model. However, the marginalization is intractable because of the sum over a combinatorial number of alignments. Alternatively, the gradient can be approximated by sampling from the posterior distribution (i.e., $p(\tilde{y} \mid x_{1 \ldots L}, y_{1 \ldots S})$). However, we found this had very large noise in the learning, and the gradients were often too biased, leading to models that rarely achieved decent accuracy.
Instead, we attempted to maximize the probability in Equation 10 by computing the sum over only one term – corresponding to the $\tilde{y} \in \mathcal{Y}$ with the highest posterior probability. Unfortunately, even doing this exactly is computationally infeasible, because the number of possible alignments is combinatorially large and the problem of finding the best alignment cannot be decomposed into easier subproblems. So we find the approximate best alignment with a dynamic-programming-like algorithm that we describe in the next paragraph.
At each block $b$, for each output position $u$, this algorithm keeps track of the approximate best hypothesis $h(u, b)$ that represents the best partial alignment of the output sequence $y_{1 \ldots u}$ to the partial input $x_{1 \ldots bW}$. Each hypothesis keeps track of the best alignment that it represents, and the recurrent states of the decoder at the last time step corresponding to this alignment. At block $b+1$, all hypotheses $h(u, b)$ are extended by at most $M$ tokens using their recurrent states (note the minutia that each of these extensions ends with the <e> symbol). For each position $u'$, the highest log-probability hypothesis is kept (we also experimented with sampling from the extensions in proportion to their probabilities, but this did not always improve results). The alignment from the best hypothesis at the last block is used for training.
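The dynamic-programming-like search can be schematized as below. The `score_chunk` function stands in for the transducer's log-probability of an extension, and a real implementation would also carry the recurrent decoder states with each hypothesis; this is a structural sketch only:

```python
def best_alignment(targets, num_blocks, M, score_chunk):
    """Approximate best alignment of targets to blocks.

    h[u] holds (log_prob, alignment) for the best hypothesis that has
    emitted the first u targets. At each block, every hypothesis is
    extended by 0..M-1 tokens followed by <e>, and only the best
    hypothesis per output position survives.
    score_chunk(u, chunk, b): stand-in for the transducer's log p(chunk).
    """
    S = len(targets)
    h = {0: (0.0, [])}
    for b in range(num_blocks):
        new_h = {}
        for u, (lp, align) in h.items():
            # k emitted tokens plus the trailing <e> must fit in M tokens
            for k in range(min(M, S - u + 1)):
                chunk = targets[u:u + k] + ["<e>"]
                cand = (lp + score_chunk(u, chunk, b), align + [chunk])
                if u + k not in new_h or cand[0] > new_h[u + k][0]:
                    new_h[u + k] = cand
        h = new_h
    return h[S][1]   # alignment that emitted all S targets
```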
In theory, we need to compute the alignment for each sequence when it is trained, using the model parameters at that time. In practice, we batch the alignment inference steps, using parallel tasks, and cache these alignments. Thus alignments are computed less frequently than the model updates – typically every 100–300 sequences. This procedure has the flavor of experience replay from deep reinforcement learning work (dqn).

3.6 Inference
For inference, given the input acoustics $x_{1 \ldots L}$ and the model parameters, $\theta$, we find the sequence of labels $y_{1 \ldots S}$ that maximizes the probability of the labels, conditioned on the data, i.e.,

$$\hat{y}_{1 \ldots S} = \arg\max_{y_{1 \ldots S}} \log p(y_{1 \ldots S} \mid x_{1 \ldots L}) \quad (12)$$
Exact inference in this scheme is computationally expensive because the expression for the log probability does not permit decomposition into smaller terms that can be independently computed. Instead, each candidate $y_{1 \ldots S}$ would have to be tested independently, and the best sequence over an exponentially large number of sequences would have to be discovered. Hence, we use a beam search heuristic to find the “best” set of candidates. To do this, at each output step $m$, we keep a heap of the $n$ best alternative prefixes, and extend each one by one symbol, trying out all the possible alternative extensions and keeping only the $n$ best extensions. Included in the beam search is the act of moving the attention to the next input block. The beam search ends either when the sequence is longer than a pre-specified threshold, or when the <e> symbol is produced at the last block.

4 Experiments and Results
4.1 Addition Toy Task
We experimented with the Neural Transducer on the toy task of adding two three-digit decimal numbers. The second number is presented in reverse order, and so is the target output. Thus the model can produce the first output digit as soon as the first digit of the second number is observed. The model is able to learn this task with a very small number of units (both encoder and transducer are one-layer unidirectional LSTM RNNs with 100 units).
As can be seen below, the model learns to output the digits as soon as the required information is available. Occasionally the model waits an extra step to output its target symbol. We show the model outputs (second row of each pair) for four different input examples (first row). A block window size of W=1 was used, with M=8.
Input:   2     +     7     2     5     <s>
Output:  <e>   <e>   <e>   9<e>  2<e>  5<e>

Input:   2     2     7     +     3     <s>
Output:  <e>   <e>   <e>   <e>   <e>   032<e>

Input:   1     7     4     +     3     <s>
Output:  <e>   <e>   <e>   <e>   <e>   771<e>

Input:   4     0     +     2     6     2     <s>
Output:  <e>   <e>   <e>   <e>   2<e>  0<e>  3<e>
The model achieves an error rate of 0% after 500K examples.
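A data generator for this toy task, inferred from the formatting of the examples above (second operand and target written in reverse, separators + and <s>); the exact tokenization is an assumption:

```python
def make_addition_example(a, b):
    """One training pair for the reversed-addition toy task: the second
    operand's digits and the target sum's digits are reversed, so the
    first output digit is determined as soon as the low-order digits of
    both operands have been seen."""
    inputs = list(str(a)) + ["+"] + list(str(b))[::-1] + ["<s>"]
    target = list(str(a + b))[::-1]
    return inputs, target
```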
4.2 Timit
We used TIMIT, a standard benchmark for speech recognition, for our larger experiments. Log Mel filterbanks were computed every 10ms as inputs to the system. The targets were the 60 phones defined for the TIMIT dataset (h# were relabelled as pau).
We used stochastic gradient descent with momentum, with a batch size of one utterance per training step. An initial learning rate of 0.05 and a momentum of 0.9 were used. The learning rate was reduced by a factor of 0.5 every time the average log probability over the validation set decreased (note that TIMIT provides a validation set, called the dev set; we use these terms interchangeably). The decrease was applied a maximum of 4 times. The models were trained for 50 epochs, and the parameters from the epoch with the best dev-set log probability were used for decoding.
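The learning-rate schedule can be sketched as follows; comparing each epoch's validation log-probability against the best value so far is an assumption, since the text does not specify the exact comparison:

```python
def lr_schedule(val_log_probs, lr0=0.05, factor=0.5, max_cuts=4):
    """Learning-rate trajectory: halve the rate whenever the average
    validation log-probability decreases (here: drops below the best
    seen so far, an assumed reading), for at most max_cuts reductions."""
    lr, cuts, lrs = lr0, 0, []
    best = float("-inf")
    for lp in val_log_probs:
        if lp < best and cuts < max_cuts:
            lr *= factor
            cuts += 1
        best = max(best, lp)
        lrs.append(lr)
    return lrs
```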
We trained a Neural Transducer with a three-layer LSTM transducer RNN coupled to a three-layer unidirectional LSTM encoder RNN, and achieved a PER of 20.8% on the TIMIT test set. This model used the LSTM-attention mechanism. Alignments were generated from a model that was updated after every 300 steps of momentum updates. Interestingly, the alignments generated by the model are very similar to the alignments produced by a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) system that we trained using the Kaldi toolkit – even though the model was trained entirely discriminatively. The small differences in alignment correspond to an occasional phoneme emitted slightly later by our model, compared to the GMM-HMM system.
We also trained models using alignments generated from the GMM-HMM model trained with Kaldi. The frame-level alignments from Kaldi were converted into block-level alignments by assigning each phone in the sequence to the block it was last observed in. The same architecture described above achieved a PER of 19.8% with these alignments.
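The conversion from frame-level to block-level alignments can be sketched as follows, assuming each phone occurrence is summarized by the index of its last frame:

```python
def frames_to_block_alignment(segments, W, num_blocks):
    """Convert a frame-level forced alignment into per-block output
    chunks. segments: list of (phone, last_frame_index) pairs, one per
    phone occurrence, in utterance order. Each phone is assigned to the
    block in which it is last observed."""
    blocks = [[] for _ in range(num_blocks)]
    for phone, last_frame in segments:
        blocks[last_frame // W].append(phone)
    return blocks
```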
We did further experiments to assess the properties of the model. In order to avoid the computation associated with finding the best alignments, we ran these experiments using the GMM-HMM alignments.
Table 1 shows a comparison of our method against a basic implementation of a sequence-to-sequence model that produces outputs for each block independently of the other blocks, and concatenates the produced sequences. Here, the sequence-to-sequence model produces the output conditioned on the state of the encoder at the end of the block. Both models used an encoder with two layers of 250 LSTM cells, without attention. The standard sequence-to-sequence model performs significantly worse than our model – the recurrent connections of the transducer across blocks are clearly helpful in improving the accuracy of the model.
W    Block-Recurrence    PER
15   No                  34.3
15   Yes                 20.6
Figure 3 shows the impact of block size on the accuracy of the different transducer variants that we used. See Section 3.3 for a description of the {DOT,MLP,LSTM}-attention models. All models used a two-layer LSTM encoder and a two-layer LSTM transducer. The model is sensitive to the choice of the block size when no attention is used. However, it can be seen that with an appropriate choice of window size (W=8), the Neural Transducer without attention can match the accuracy of the attention-based Neural Transducers. Further exploration of this configuration should lead to improved results.
When attention is used in the transducer, the precise value of the block size becomes less important. The LSTM-based attention model seems to be more consistent than the other attention mechanisms we explored. Since this model performed best with W=25, we used this configuration for subsequent experiments.
Table 2 explores the impact of the number of layers in the transducer and the encoder on the PER. A three-layer encoder coupled to a three-layer transducer performs best on average. Four-layer transducers produced results with higher spread in accuracy – possibly because of the more difficult optimization involved. Thus, the best average PER we achieved (over 3 runs) was 19.8% on the TIMIT test set. These results could probably be improved with other regularization techniques, as reported by chorowskinips2015, but we did not pursue those avenues in this paper.
# of layers in encoder \ transducer      1       2       3       4
2                                       19.2    18.9    18.8     –
3                                        –      18.5    18.2    19.4
For a comparison with previously published sequence-to-sequence models on this task, we used a three-layer bidirectional LSTM encoder with 250 LSTM cells in each direction, and achieved a PER of 18.7%. By contrast, the best reported result using previous sequence-to-sequence models is 17.6% (chorowskinips2015). However, that result comes with more careful training techniques than we attempted for this work. Given the high variance in results from run to run on TIMIT, these numbers are quite promising.
5 Discussion
One of the important side-effects of our model using partial conditioning with a blocked transducer is that it naturally alleviates the problem of “losing attention” suffered by sequence-to-sequence models. Because of this problem, sequence-to-sequence models perform worse on longer utterances (chorowskinips2015; chan2015listen). It is automatically tackled in our model because each new block automatically shifts the attention monotonically forward. Within a block, the model learns to move attention forward from one step to the next, and the attention mechanism rarely suffers, because both the size of a block and the number of output steps for a block are relatively small. As a result, an error in attention in one block has minimal impact on the predictions at subsequent blocks.
Finally, we note that increasing the block size, $W$, so that it is as large as the input utterance makes the model similar to vanilla end-to-end models (chorowskinips2014; chan2015listen).
6 Conclusion
We have introduced a new model that uses partial conditioning on inputs to generate output sequences. This allows the model to produce output as input arrives. This is useful for speech recognition systems and will also be crucial for future generations of online speech translation systems. Further, it can be useful for performing transduction over long sequences – something that is possibly difficult for sequence-to-sequence models. We applied the model to a toy task of addition, and to a phone recognition task, and showed that it can produce results comparable to the state of the art from sequence-to-sequence models.
References
 (1) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations, 2015.
 (2) Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end attention-based large vocabulary speech recognition. arXiv preprint arXiv:1508.04395, 2015.
 (3) William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.

 (4) Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Conference on Empirical Methods in Natural Language Processing, 2014.
 (5) Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results. In Neural Information Processing Systems: Deep Learning and Representation Learning Workshop, 2014.
 (6) Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-Based Models for Speech Recognition. In Neural Information Processing Systems, 2015.
 (7) Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013.

 (8) Alex Graves. Sequence Transduction with Recurrent Neural Networks. In International Conference on Machine Learning: Representation Learning Workshop, 2012.
 (9) Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech Recognition with Deep Recurrent Neural Networks. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
 (10) Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
 (11) Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
 (12) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013.
 (13) Arvind Neelakantan, Quoc V Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent. arXiv preprint arXiv:1511.04834, 2015.
 (14) Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
 (15) Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015.
 (16) Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. Endtoend memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439, 2015.
 (17) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to Sequence Learning with Neural Networks. In Neural Information Processing Systems, 2014.
 (18) Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. In Neural Information Processing Systems, 2015.
 (19) Oriol Vinyals and Quoc V. Le. A neural conversational model. In ICML Deep Learning Workshop, 2015.

 (20) Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and Tell: A Neural Image Caption Generator. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
 (21) Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015.