I Introduction
Generative models have long been the bread and butter of traditional speech recognition. Under these models, transcription is typically performed by Maximum a posteriori (MAP) estimation of the word sequence, given a trained generative model and an acoustic observation. Gaussian Mixture Models (GMMs) were the dominant models for instantaneous emission distributions and were coupled to Hidden Markov Models (HMMs) to model the dynamics. While posteriors from GMMs have lately been supplanted by Deep Neural Networks (DNNs), the recognition model essentially retains its generative interpretation.
Recent developments in deep learning have given rise to a powerful alternative: discriminative models, called sequence-to-sequence models, can be trained to model the conditional probability distribution of the output transcript sequence given the input acoustic sequence directly, without inverting a generative model. Sequence-to-sequence models [1, 2] are a general model family for solving supervised learning problems where both the inputs and the outputs are sequences. The performance of the original sequence-to-sequence model was greatly improved by the invention of soft attention [3], which made it possible for sequence-to-sequence models to generalize better and achieve excellent results with much smaller networks on long sequences. The sequence-to-sequence model with attention has had considerable empirical success on machine translation [3], speech recognition [4, 5], image caption generation [6, 7], and question answering [8]. Although remarkably successful, the sequence-to-sequence model with attention must process the entire input sequence before producing an output. However, there are tasks where it is useful to start producing outputs before the entire input is processed. These tasks include speech recognition, machine translation, and simultaneous speech recognition and translation with one model [9].
Recently, new models have been developed that overcome this shortcoming. These models, which we call online sequence-to-sequence models, produce outputs as inputs are received [10, 11], while retaining the causal nature of sequence-to-sequence models. In this paper, we use the model that we previously introduced in [11].^1 This model uses binary stochastic variables to select the timesteps at which to produce outputs. We call this model the Neural Autoregressive Transducer (NAT). The stochastic variables are trained with a policy gradient method. However, unlike the work by Luo et al. [11], we use a modified method of training that improves our results. Further, we explore the use of this model on noisy input, presenting single-channel mixed speech from two speakers at different mixing proportions as input to the model. This model is uniquely suited to the task because it is causal and because it is trained discriminatively. Results on a task we call Multi-TIMIT show that the model is able to handle noisy speech quite well. We speculate that the use of this model with a multiple-microphone arrangement should lead to strong results on mixed and noisy speech.

^1 We borrow text heavily from this prior paper to explain the motivation and several details of the model.
I-A Relation To Prior Work
Sequence-to-sequence models have recently been applied to phoneme recognition [12] and speech recognition [5]. In these models, the input acoustics, in the form of log Mel filter banks, are processed with an encoder neural network that is usually bidirectional. A decoder then produces output tokens one symbol at a time, using next-step prediction. At each step, the decoder uses "soft attention" over the encoder time steps to create a "context vector" that summarizes the features of the encoder. The context vector is fed into the decoder and is used to make the prediction at each time step.
While the idea of soft attention as it is currently understood was first introduced by Graves [13], the first truly successful formulation of soft attention is due to Bahdanau et al. [3]. It uses a neural architecture that implements a "search query" to find the most relevant element of the input, which it then picks out. Soft attention has quickly become the method of choice in various settings because it is easy to implement and has led to state-of-the-art results on various tasks. For example, the Neural Turing Machine [14] and the Memory Network [15] both use an attention mechanism similar to that of Bahdanau et al. [3] to implement models for learning algorithms and for question answering. While soft attention is immensely flexible and easy to use, it assumes that the input sequence is provided in its entirety at test time. This is an inconvenient assumption whenever we wish to produce the relevant output as soon as possible, without first processing the input sequence in its entirety. Doing so is useful for a speech recognition system that runs on a smartphone, and it is especially useful in a combined speech recognition and machine translation system.
Our model can be thought of as extending two previous models used for speech recognition: Connectionist Temporal Classification (CTC) [16] and the Sequence Transducer [17]. However, neither CTC nor the Sequence Transducer is a causal model: both compute features from the data independently at each time step, and this feature computation is unaffected by previously output tokens. Note that while the language-model RNN in the Sequence Transducer computes predictions causally, these do not affect the local class predictions made from the acoustics, which are independent of each other and not causal.
There is prior work investigating causal models that produce outputs without consuming the input in its entirety. This includes the work of Mnih [18] and Zaremba and Sutskever [19], who used the REINFORCE algorithm to learn where to consume input and when to emit output. Finally, Jaitly et al. [10] used an online sequence-to-sequence method with conditioning on partial inputs, which yielded encouraging results on the TIMIT dataset.
This work is technically an extension of our prior work in [11], where policy gradients with continuous rewards were used to train the model. In this paper, we use similar ideas, but instead of a single-sample REINFORCE estimate with a parametric baseline for centering the training of the stochastic model, we use multi-sample training with a baseline that is an average over leave-one-out samples.

Further, in this paper we explore the use of this model on noisy data, specifically speech from two different speakers mixed at different levels.
II Methods
In this section we describe the details of the Neural Autoregressive Transducer, including the recurrent neural network architecture, the reward function, and the training and inference procedures. Much of the description is borrowed heavily from our description in [11]. We refer the reader to figure 1 for the details of the model.

We begin by describing the probabilistic model used in this work. At each time step $t$, a recurrent neural network (represented in figure 1) decides whether to emit an output token. The decision is made by a stochastic binary logistic unit $\tilde{b}_t$. Let $b_t \sim \mathrm{Bernoulli}(\tilde{b}_t)$ be a Bernoulli random variable such that if $b_t$ is 1, then the model outputs the vector $d_t$, a softmax distribution over the set of possible tokens. The current position in the output sequence can be written $p_t = \sum_{s \leq t} b_s$, which is incremented by 1 every time the model chooses to emit. The model's goal is then to predict the desired output $y_{p_t}$; thus whenever $b_t = 1$, the model experiences a loss given by

$$\mathcal{L}_t = -\sum_{j} \log d_t(j) \, \mathbb{1}\left[y_{p_t} = j\right],$$

where $j$ ranges over the number of possible output tokens.

At each step of the RNN, the binary decision of the previous timestep, $b_{t-1}$, and the corresponding previous target, $\tilde{y}_{t-1}$, are fed into the model as input. This feedback ensures that the model's outputs are causally dependent on its previous outputs, and thus that the model belongs to the sequence-to-sequence family.
We train this model by estimating the gradient of the log probability of the target sequence with respect to the parameters of the model. While the model is not fully differentiable because it uses non-differentiable binary stochastic units, we can estimate the gradients with respect to the model parameters by using a policy gradient method, which has been discussed in detail by Schulman et al. [20] and used by Zaremba and Sutskever [19].
In more detail, we use supervised learning to train the network to make correct output predictions, and reinforcement learning to train the network to decide when to emit the various outputs. Let us assume that the input sequence is given by $(x_1, \ldots, x_{T_1})$ and let the desired sequence be $(y_1, \ldots, y_{T_2})$, where $y_{T_2}$ is a special end-of-sequence token, and where we assume that $T_2 \leq T_1$. Then the log probability of the model is given by the following equations:

$$h_t = \mathrm{RNN}\left(h_{t-1}, \left(x_t; b_{t-1}; \tilde{y}_{t-1}\right)\right) \quad (1)$$

$$\tilde{b}_t = \sigma\left(\langle w_b, h_t \rangle + c_b\right) \quad (2)$$

$$b_t \sim \mathrm{Bernoulli}\left(\tilde{b}_t\right) \quad (3)$$

$$p_t = \sum_{s \leq t} b_s \quad (4)$$

$$d_t = \mathrm{softmax}\left(W_d h_t + c_d\right) \quad (5)$$

$$\tilde{y}_t = \begin{cases} y_{p_t} & \text{if } b_t = 1 \\ \tilde{y}_{t-1} & \text{otherwise} \end{cases} \quad (6)$$

$$\log p(y \mid x; b) = \sum_{t \,:\, b_t = 1} \log d_t\left(y_{p_t}\right) \quad (7)$$
In the above equations, $p_t$ is the "position" of the model in the output, which is always equal to $b_1 + b_2 + \cdots + b_t$: the position advances if and only if the model makes a prediction. Note that we define $\tilde{y}_0$ to be a special beginning-of-sequence symbol. The above equations also suggest that our model can easily be implemented within a static graph in a neural net library such as TensorFlow [21], even though the model has, conceptually, a dynamic neural network architecture. Following Zaremba and Sutskever [19], we modify the model from the above equations by forcing $b_t$ to be equal to 1 whenever the number of remaining input steps is no larger than the number of remaining target tokens, i.e. whenever $T_1 - t \leq T_2 - p_{t-1}$. Doing so ensures that the model is forced to predict the entire target sequence $y$, and that it cannot learn the degenerate solution in which it chooses never to make any prediction and therefore never experiences any prediction error.
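To make the emit/no-emit mechanics of equations (1)–(7) concrete, here is a minimal NumPy sketch of one sampled forward pass, including the forced-emission rule. The single-layer tanh RNN, the weight shapes, and the scalar coding of the fed-back previous target are illustrative assumptions; the actual model uses stacked LSTMs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def nat_forward(x, y, params, rng):
    """One stochastic forward pass of the emission model.
    Returns the decisions b, the log-probability of the target under
    the sampled alignment, and the per-step rewards."""
    Wh, Wx, wb, Wd = params["Wh"], params["Wx"], params["wb"], params["Wd"]
    T1, T2 = len(x), len(y)
    h = np.zeros(Wh.shape[0])
    prev_b, prev_y = 0.0, 0.0    # b_0 = 0 and a toy <bos> code for y~_0
    p = 0                        # position in the output sequence
    b, rewards = [], []
    for t in range(T1):
        # State update conditioned on the input and the fed-back
        # previous decision / previous emitted target.
        h = np.tanh(Wh @ h + Wx @ np.concatenate([x[t], [prev_b, prev_y]]))
        b_t = 1 if rng.random() < sigmoid(wb @ h) else 0
        if p >= T2:
            b_t = 0              # all targets emitted: stop emitting
        elif T1 - t <= T2 - p:
            b_t = 1              # force emission so the whole target fits
        b.append(b_t)
        if b_t == 1:
            d = softmax(Wd @ h)  # distribution over output tokens
            rewards.append(float(np.log(d[y[p]])))
            prev_y = float(y[p])
            p += 1
        else:
            rewards.append(0.0)
        prev_b = float(b_t)
    return np.array(b), sum(rewards), rewards
```

Because of the forced-emission rule, a pass over $T_1$ input steps always emits exactly $T_2$ tokens, whatever the sampled decisions.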
We now elaborate on the manner in which the gradient is computed. For a given value of the binary decisions $b = (b_1, \ldots, b_{T_1})$, we can compute $\partial \log p(y \mid x; b) / \partial \theta$ using the backpropagation algorithm. Figuring out how to learn the emission decisions $b$ is slightly more challenging. To understand it, we write the objective as an expectation of the reward $R(b) = \log p(y \mid x; b)$ under the distribution over binary vectors, and derive a gradient estimate with respect to the parameters $\theta$ of the model:

$$\mathbb{E}_b\left[R(b)\right] = \sum_{b} p_\theta(b) \, R(b) \quad (8)$$
Differentiating, we get

$$\frac{\partial}{\partial \theta} \mathbb{E}_b\left[R(b)\right] = \mathbb{E}_b\left[\frac{\partial R(b)}{\partial \theta} + R(b) \frac{\partial \log p_\theta(b)}{\partial \theta}\right] \quad (9)$$

where $p_\theta(b)$ is the probability of a binary sequence of the decision variables. In our model, $p_\theta(b)$ is computed using the chain rule over the $\tilde{b}_t$ probabilities:

$$p_\theta(b) = \prod_{t=1}^{T_1} \tilde{b}_t^{\,b_t} \left(1 - \tilde{b}_t\right)^{1 - b_t} \quad (10)$$
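The chain-rule factorization of $p_\theta(b)$ is most conveniently computed in log space, as a sum of per-step Bernoulli log-probabilities. A minimal sketch:

```python
import numpy as np

def log_prob_decisions(b, b_tilde):
    """Log-probability of a binary decision sequence b under the
    per-step Bernoulli emission probabilities b_tilde (Eq. 10),
    computed as a sum of per-step log factors."""
    b = np.asarray(b, dtype=float)
    b_tilde = np.asarray(b_tilde, dtype=float)
    return float(np.sum(b * np.log(b_tilde) + (1 - b) * np.log(1 - b_tilde)))
```

For example, two independent fair-coin decisions have probability 0.25, so the function returns $\log 0.25$ for any two-step sequence with $\tilde{b}_t = 0.5$.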
Since the gradient in equation 9 is a policy gradient, it has very high variance, and variance reduction techniques must be applied. As is common in such problems, we use centering (also known as baselines) and Rao-Blackwellization to reduce the variance. See Mnih and Gregor [22] for an example of the use of such techniques in training generative models with stochastic units. Baselines are commonly used in the reinforcement learning literature to reduce the variance of estimators, relying on the identity $\mathbb{E}_b\left[\frac{\partial \log p_\theta(b)}{\partial \theta}\right] = 0$. Thus the gradient in equation 9 can be better estimated through the use of a well-chosen baseline function $\Omega_t(s_t)$, where $s_t$ is a vector of side information consisting of the input and all the outputs up to timestep $t$:

$$\frac{\partial}{\partial \theta} \mathbb{E}_b\left[R(b)\right] = \mathbb{E}_b\left[\frac{\partial R(b)}{\partial \theta} + \sum_{t=1}^{T_1} \left(R(b) - \Omega_t(s_t)\right) \frac{\partial \log p_\theta\left(b_t \mid b_{<t}\right)}{\partial \theta}\right] \quad (11)$$
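The effect of centering can be seen on a one-parameter toy problem. The sketch below estimates the gradient of $\mathbb{E}_{b \sim \mathrm{Bernoulli}(\theta)}[R(b)]$ with and without a baseline; the two-outcome reward and the baseline value are made up for illustration and are not from the paper.

```python
import numpy as np

def reinforce_grads(theta, baseline, n, rng):
    """Per-sample score-function estimates of
    d/dtheta E_{b ~ Bernoulli(theta)}[R(b)], centered by `baseline`."""
    b = (rng.random(n) < theta).astype(float)
    reward = np.where(b == 1.0, 2.0, 1.0)          # toy reward: R(1)=2, R(0)=1
    score = (b - theta) / (theta * (1.0 - theta))  # d/dtheta log Bernoulli(b)
    return (reward - baseline) * score

rng = np.random.default_rng(1)
plain = reinforce_grads(0.3, 0.0, 200_000, rng)     # no baseline
centered = reinforce_grads(0.3, 1.3, 200_000, rng)  # baseline near E[R]
# Both estimators average to the true gradient R(1) - R(0) = 1,
# but the centered one has far smaller variance.
```

Subtracting a constant leaves the expectation unchanged (by the identity above) while removing most of the variance, which is why a well-chosen $\Omega_t(s_t)$ matters so much in practice.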
The variance of this estimator can itself be further reduced by Rao-Blackwellization, replacing the total reward at each step with the sum of future rewards:

$$\frac{\partial}{\partial \theta} \mathbb{E}_b\left[R(b)\right] = \mathbb{E}_b\left[\frac{\partial R(b)}{\partial \theta} + \sum_{t=1}^{T_1} \left(\sum_{s \geq t} r_s - \Omega_t(s_t)\right) \frac{\partial \log p_\theta\left(b_t \mid b_{<t}\right)}{\partial \theta}\right] \quad (12)$$

where $r_s$ is the per-step reward, so that $R(b) = \sum_s r_s$.
This term, while not computable analytically, can be estimated numerically by drawing sample trajectories, indexed by $i$. Thus we have the following estimate of the gradient:

$$\frac{\partial}{\partial \theta} \mathbb{E}_b\left[R(b)\right] \approx \frac{1}{N} \sum_{i=1}^{N} \left[\frac{\partial R(b^i)}{\partial \theta} + \sum_{t=1}^{T_1} \left(\sum_{s \geq t} r_s^i - \Omega_t(s_t^i)\right) \frac{\partial \log p_\theta\left(b_t^i \mid b_{<t}^i\right)}{\partial \theta}\right] \quad (13)$$

where the superscript $i$ indicates the sample index. In previous work [11] we used a single-sample estimate (i.e. $N = 1$) and a neural network as a parametric baseline to estimate $\Omega_t$. This was computed using a linear projection of the hidden state of the top LSTM layer of the RNN, i.e. $\Omega_t = \langle w_\Omega, h_t \rangle + c_\Omega$, where $w_\Omega$ is a vector and $c_\Omega$ is a bias.
Recent work in reinforcement learning and variational methods has shown the advantage of multi-sample estimates [23, 24]. In this paper, we therefore explore the use of a multi-sample estimate with $N > 1$. Further, as in [24], we use a baseline computed as a leave-one-out average, which we explain next.
A straightforward choice of this baseline is the average sum of future rewards from the other samples, $\frac{1}{N-1} \sum_{j \neq i} \sum_{s \geq t} r_s^j$; however, this ignores the fact that the internal states of the different samples are not the same. Ideally we would average over multiple trajectories starting from the same state (i.e. the same number of inputs consumed and outputs produced), but this is computationally expensive. As a result there is an imbalance: some of the samples have emitted more symbols than others, so their future rewards may not be directly comparable. We add a residual term to address this:

$$\Omega_t(s_t^i) = \frac{1}{N-1} \sum_{j \neq i} \left[\sum_{s \geq t} r_s^j + \left(\sum_{s < t} r_s^j - \sum_{s < t} r_s^i\right)\right] \quad (14)$$
We call this the leave-one-out baseline.
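The basic leave-one-out average is simple to compute from a batch of sampled trajectories. A minimal sketch, showing only the plain average of the other samples' total rewards (the per-timestep future-reward restriction and the residual correction discussed above are omitted for brevity):

```python
import numpy as np

def leave_one_out_baselines(rewards):
    """Baseline for each of N sampled trajectories: the average total
    reward of the *other* N-1 samples. `rewards` has shape (N, T),
    holding per-step rewards for N trajectories of length T."""
    totals = rewards.sum(axis=1)            # total reward R^i per sample
    n = len(totals)
    # For sample i, the mean of the totals excluding entry i:
    return (totals.sum() - totals) / (n - 1)
```

For example, with per-sample totals [3, 4, 2], the baselines are [(4+2)/2, (3+2)/2, (3+4)/2] = [3.0, 2.5, 3.5].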
Finally, we note that reinforcement learning models are often trained with augmented objectives that add an entropy penalty for actions that are too confident [25, 26]. We found this to be crucial for our models to train successfully. With this regularization term, the augmented reward at any time step $t$ is:

$$\hat{r}_t = r_t - \lambda \left(\tilde{b}_t \log \tilde{b}_t + \left(1 - \tilde{b}_t\right) \log \left(1 - \tilde{b}_t\right)\right) \quad (15)$$

where $\lambda$ is the weight of the entropy penalty.
Without this regularization, the RNN emits all the symbols clustered in time, either at the very start of the input sequence or at the end. The model has a difficult time recovering from this configuration, since the gradients are too noisy and biased. With the penalty, however, the model successfully navigates away from parameters that lead to very clustered predictions and eventually learns sensible parameters. An alternative we explored was to penalize the KL divergence of the predictions from a target Bernoulli rate of emission at every step. While this helped the model, it was not as successful as entropy regularization. See figure 2 for an example of this clustering problem and how regularization ameliorates it.
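The entropy term added to the reward is simply the Bernoulli entropy of each emission probability, which is largest at $\tilde{b}_t = 0.5$ and vanishes as the decision saturates. A small sketch (the clipping epsilon is an implementation assumption for numerical safety):

```python
import numpy as np

def entropy_bonus(b_tilde, weight):
    """Weighted entropy of each Bernoulli emission probability, i.e. the
    regularizer added to the per-step reward in Eq. 15. Near-saturated
    probabilities (overconfident decisions) receive almost no bonus."""
    p = np.clip(b_tilde, 1e-12, 1.0 - 1e-12)
    return weight * (-(p * np.log(p) + (1.0 - p) * np.log(1.0 - p)))
```

This is why the penalty discourages the clustered-emission failure mode: clustering requires long runs of saturated emit/no-emit decisions, which forfeit the entropy bonus.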
III Experiments and Results
We conducted experiments on two different speech corpora using this model. Initial experiments were conducted on TIMIT to find hyperparameters that lead to stable behavior of the model. The second set of experiments was conducted on speech mixed from two different speakers, a male speaker and a female speaker, at different mixing proportions. We call these experiments Multi-TIMIT.
III-A TIMIT
The TIMIT dataset is a phoneme recognition task in which phoneme sequences must be inferred from input audio utterances. The training set contains 3696 different audio clips, and each target is one of 60 phonemes. Before scoring, these are collapsed to a standard 39-phoneme set, and the Levenshtein edit distance is then computed to obtain the phoneme error rate (PER).
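PER scoring reduces to an edit distance between the collapsed reference and hypothesis phoneme sequences, normalized by the reference length. A minimal sketch (the standard 60-to-39 phoneme collapse mapping is omitted):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences via the
    classic dynamic-programming table."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # deletions only
    for j in range(n + 1):
        d[0][j] = j                      # insertions only
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[m][n]

def per(ref, hyp):
    """Phoneme error rate: edits needed to turn hyp into ref,
    divided by the reference length."""
    return edit_distance(ref, hyp) / len(ref)
```

For example, a reference of three phonemes with one deletion in the hypothesis scores a PER of 1/3.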
The models we trained on TIMIT had two layers with 256 units per layer. Each model was trained with Adam [27] at a fixed learning rate. We used asynchronous SGD with 16 replicas, with TensorFlow as the neural network framework for training the models [21, 28]. No GPUs were used in the training.
Entropy regularization was crucial to producing the best results; without an entropy penalty, emissions clumped at the start or end of the utterances. We started the weight of the entropy penalty at 1 and decayed it linearly to 0.1. We began decaying the entropy penalty at 10,000 steps and experimented with ending the decay at {100,000, 200,000, 300,000, 400,000} steps, finding that step 200,000 worked best. After step 200,000 the entropy penalty weight was kept at 0.1.
We also regularized our models with variational weight noise [29]. We tested the values {0.075, 0.1, 0.15} for the standard deviation of the noise and found that 0.15 worked best. We started the standard deviation of the variational noise at 0 and increased it linearly from step 10,000 to a value of 0.15 at step 200,000. In each experiment the entropy penalty stopped decaying on the same step that the variational noise finished increasing.
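Both anneals described above are plain linear ramps over the same step range. A hypothetical helper capturing the schedule (the function name and signature are our own, not from the paper's code):

```python
def linear_schedule(step, start_step, end_step, start_value, end_value):
    """Linearly interpolate a hyperparameter between two step
    boundaries, holding it constant outside the ramp. Used here for
    both the entropy-penalty decay (1.0 -> 0.1) and the variational
    weight-noise ramp (0.0 -> 0.15) between steps 10,000 and 200,000."""
    if step <= start_step:
        return start_value
    if step >= end_step:
        return end_value
    frac = (step - start_step) / (end_step - start_step)
    return start_value + frac * (end_value - start_value)
```

Midway through the ramp (step 105,000) this gives an entropy weight of 0.55 and a noise standard deviation of 0.075.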
We also used L2-norm weight regularization to encourage small weights. After trying several weights, we found that a value of 0.001 worked best.
Lastly, we note that the input filterbanks were processed such that three consecutive frames of filterbanks, representing a total of 30 ms of speech, were concatenated and input to the model as a single step. This reduces the number of input steps and allows the model to learn hard alignments much faster than it would otherwise.
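The frame stacking above is a simple reshape. A minimal sketch, assuming 10 ms frames so that three stacked frames cover 30 ms, and dropping any leftover frames at the end (padding would be an equally reasonable choice):

```python
import numpy as np

def stack_frames(feats, k=3):
    """Concatenate every k consecutive feature frames into one input
    step. `feats` has shape (T, D); the result has shape (T // k, k*D),
    e.g. 41-dim filterbank frames become 123-dim inputs for k=3."""
    t = (len(feats) // k) * k            # drop the trailing remainder
    return feats[:t].reshape(t // k, k * feats.shape[1])
```

This matches the 123-dimensional per-step input mentioned in the Multi-TIMIT setup (Section III-B) if the base frames are 41-dimensional.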
See Figure 2(a) for an example of a training curve. It can be seen that the model requires a large number of updates (~100K) before meaningful models are learnt. However, once learning starts, steady progress is made, even though the model is trained by policy gradient.
Table I shows a summary of the results achieved on TIMIT by our method and by other, more mature models. As can be seen, our model compares favorably with other unidirectional models such as CTC and DNN-HMMs. Combining it with more sophisticated features such as convolutional models should produce better results. Moreover, this model has the capacity to absorb language models and, as a result, should be better suited to end-to-end training than CTC and DNN-HMM based models, which cannot inherently capture language models because they predict all tokens independently of each other.
TABLE I: Phoneme error rates (PER) on TIMIT.

Method                                               | PER
CTC [30]                                             | 19.6%
DNN-HMM [31]                                         | 20.7%
seq2seq with attention (our implementation)          | 24.5%
Neural Transducer [10]                               | 19.8%
NAT (Stacked LSTM) + Parametric Baseline [11]        | 21.5%
NAT (Grid LSTM) + Parametric Baseline [11]           | 20.5%
NAT (Stacked LSTM) + Averaging Baseline (this paper) | 20.0%
III-B Multi-TIMIT
We generated a new dataset by mixing a male voice with a female voice from the original TIMIT data. Each utterance in the original TIMIT data is paired with an utterance from the opposite gender. The waveforms of both utterances are first scaled to the same range, and the second utterance is then attenuated before the two are mixed. We explored three scales for the second utterance, 50%, 25%, and 10%, creating three sets of experiments. The same feature generation method described above was used, resulting in a 123-dimensional input per frame. The transcript of speaker 1 was used as the ground-truth transcript for each new utterance. This data follows the same train, dev, and test split as TIMIT; as a result, the mixed data has the same number of train, dev, and test utterances as the original TIMIT, and the same set of target phonemes.
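The mixing procedure above amounts to peak-normalizing both waveforms, attenuating the interfering one, and adding them. A minimal sketch; truncating to the shorter signal is our assumption, since the paper does not specify how differing lengths are handled:

```python
import numpy as np

def mix_utterances(primary, interferer, proportion):
    """Mix two waveforms: scale both to the same peak range, attenuate
    the interfering utterance by `proportion` (0.1, 0.25, or 0.5 in the
    Multi-TIMIT experiments), and add. Output length is truncated to
    the shorter signal."""
    p = primary / (np.abs(primary).max() + 1e-12)
    s = interferer / (np.abs(interferer).max() + 1e-12)
    n = min(len(p), len(s))
    return p[:n] + proportion * s[:n]
```

The primary (louder) speaker's transcript is then kept as the target, so the recognition task is to attend to the dominant voice.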
Our model was a 2-layer LSTM with 256 units in each layer. The same hyperparameter search strategy that was used for clean TIMIT (Section III-A) was applied here.
TABLE II: Phoneme error rates on Multi-TIMIT at different mixing proportions.

Mixing Proportion | NAT   | CTC   | RNN-Transducer
0.1               | 25.9% | 27.3% | 25.7%
0.25              | 32.5% | 33.3% | 32.2%
0.5               | 42.9% | 43.8% | 48.9%
Figures 2(b) and 2(c) show examples of training curves for mixing proportions of 0.25 and 0.5, respectively. In both cases it can be seen that the model eventually overfits the data.
Table II shows results for different mixing proportions of the confounding speaker. As expected, the model's results worsen with increasing mixing proportion. In these experiments, each audio input is always paired with the same confounding audio input. Interestingly, we found that pairing the same audio with multiple confounding audio inputs produced worse results, because of much stronger overfitting. This presumably happens because our model is powerful enough to memorize the entire transcripts.
Figure 5 shows where the model emits symbols for an example Multi-TIMIT utterance, along with a comparison to the emissions of a model trained on clean speech. Generally speaking, the model chooses to emit later on Multi-TIMIT than on TIMIT.
IV Discussion
In this paper we introduced a new way to train online sequence-to-sequence models and showed its application to noisy input. Because they are causal, these models can incorporate language models and can also generate multiple different transcripts for the same audio input. This makes them a very powerful class of models. Even on a dataset as small as TIMIT, the model is able to adapt to mixed speech. In our experiments each speaker was coupled to only one distracting speaker, so the dataset size was limited. By pairing each speaker with multiple other speakers, and predicting each one as an output, we should be able to achieve greater robustness. Because of this capability, we would like to apply these models to multi-channel, multi-speaker recognition in the future.
V Conclusions
In this work, we presented a new way of training an online sequence-to-sequence model. This model allows us to exploit the modelling power of sequence-to-sequence models without the need to process the entire input sequence first. We trained this model on the TIMIT corpus and achieved results comparable to the state of the art among unidirectional models.

We also applied this model to the task of recognizing mixed speech from two speakers, producing the output for the louder speaker. We showed that the model achieves reasonable accuracy even with single-channel input. In the future, we will apply this work to multi-speaker recognition.
References

[1] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to Sequence Learning with Neural Networks," in Neural Information Processing Systems, 2014.
[2] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation," in Conference on Empirical Methods in Natural Language Processing, 2014.
[3] D. Bahdanau, K. Cho, and Y. Bengio, "Neural Machine Translation by Jointly Learning to Align and Translate," in International Conference on Learning Representations, 2015.
[4] J. Chorowski, D. Bahdanau, K. Cho, and Y. Bengio, "End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results," in Neural Information Processing Systems: Deep Learning and Representation Learning Workshop, 2014.
[5] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals, "Listen, Attend and Spell," arXiv preprint arXiv:1508.01211, 2015.
[6] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio, "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention," in International Conference on Machine Learning, 2015.
[7] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, "Show and Tell: A Neural Image Caption Generator," in IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[8] J. Weston, S. Chopra, and A. Bordes, "Memory Networks," arXiv preprint arXiv:1410.3916, 2014.
[9] R. J. Weiss, J. Chorowski, N. Jaitly, Y. Wu, and Z. Chen, "Sequence-to-sequence models can directly transcribe foreign speech," CoRR, vol. abs/1703.08581, 2017. [Online]. Available: http://arxiv.org/abs/1703.08581
[10] N. Jaitly, D. Sussillo, Q. V. Le, O. Vinyals, I. Sutskever, and S. Bengio, "A Neural Transducer," in Advances in Neural Information Processing Systems, 2016. [Online]. Available: https://arxiv.org/abs/1511.04868
[11] Y. Luo, C.-C. Chiu, N. Jaitly, and I. Sutskever, "Learning online alignments with continuous rewards policy gradient," arXiv preprint arXiv:1608.01281, 2016.
[12] J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, "Attention-Based Models for Speech Recognition," in Neural Information Processing Systems, 2015.
[13] A. Graves, "Generating sequences with recurrent neural networks," arXiv preprint arXiv:1308.0850, 2013.
[14] A. Graves, G. Wayne, and I. Danihelka, "Neural Turing Machines," arXiv preprint arXiv:1410.5401, 2014.
[15] S. Sukhbaatar, J. Weston, R. Fergus et al., "End-to-end Memory Networks," in Advances in Neural Information Processing Systems, 2015, pp. 2431–2439.
[16] A. Graves and N. Jaitly, "Towards End-to-End Speech Recognition with Recurrent Neural Networks," in International Conference on Machine Learning, 2014.
[17] A. Graves, "Sequence Transduction with Recurrent Neural Networks," in International Conference on Machine Learning: Representation Learning Workshop, 2012.
[18] V. Mnih, N. Heess, A. Graves et al., "Recurrent Models of Visual Attention," in Advances in Neural Information Processing Systems, 2014, pp. 2204–2212.
[19] W. Zaremba and I. Sutskever, "Reinforcement Learning Neural Turing Machines," arXiv preprint arXiv:1505.00521, 2015.
[20] J. Schulman, N. Heess, T. Weber, and P. Abbeel, "Gradient estimation using stochastic computation graphs," in Advances in Neural Information Processing Systems, 2015, pp. 3510–3522.
[21] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv preprint arXiv:1603.04467, 2016.
[22] A. Mnih and K. Gregor, "Neural variational inference and learning in belief networks," CoRR, vol. abs/1402.0030, 2014. [Online]. Available: http://arxiv.org/abs/1402.0030
[23] Y. Burda, R. Grosse, and R. Salakhutdinov, "Importance weighted autoencoders," arXiv preprint arXiv:1509.00519, 2015.
[24] A. Mnih and D. J. Rezende, "Variational inference for Monte Carlo objectives," arXiv preprint arXiv:1602.06725, 2016.
[25] S. Levine, "Motor skill learning with local trajectory methods," Ph.D. dissertation, Stanford University, 2014.
[26] R. J. Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning," Machine Learning, vol. 8, no. 3–4, pp. 229–256, 1992.
[27] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[28] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng, "Large Scale Distributed Deep Networks," in Neural Information Processing Systems, 2012.
[29] A. Graves, "Practical variational inference for neural networks," in Advances in Neural Information Processing Systems, 2011, pp. 2348–2356.
[30] A. Graves, A. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013, pp. 6645–6649.
[31] A. Mohamed, G. E. Dahl, and G. Hinton, "Acoustic modeling using deep belief networks," IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 14–22, 2012.