Learning Online Alignments with Continuous Rewards Policy Gradient

08/03/2016 ∙ by Yuping Luo, et al. ∙ Google ∙ OpenAI

Sequence-to-sequence models with soft attention have had significant success in machine translation, speech recognition, and question answering. Though capable and easy to use, they require that the entire input sequence be available at the beginning of inference, an assumption that does not hold for instantaneous translation and speech recognition. To address this problem, we present a new method for solving sequence-to-sequence problems using hard online alignments instead of soft offline alignments. The online alignment model is able to start producing outputs without needing to first process the entire input sequence. A highly accurate online sequence-to-sequence model is useful because it can be used to build an accurate voice-based instantaneous translator. Our model uses hard binary stochastic decisions to select the timesteps at which outputs will be produced. The model is trained to produce these stochastic decisions using a standard policy gradient method. In our experiments, we show that this model achieves encouraging performance on the TIMIT and Wall Street Journal (WSJ) speech recognition datasets.

1 Introduction

Sequence-to-sequence models sutskever-nips-2014 ; cho-emnlp-2014 are a general model family for solving supervised learning problems where both the inputs and the outputs are sequences. The performance of the original sequence-to-sequence model has been greatly improved by the invention of soft attention bahdanau-iclr-2015 , which made it possible for sequence-to-sequence models to generalize better and achieve excellent results using much smaller networks on long sequences. The sequence-to-sequence model with attention has had considerable empirical success on machine translation bahdanau-iclr-2015 , speech recognition chorowski-nips-2014 ; chan2015listen , image caption generation xu-icml-2015 ; vinyals-arvix-2014 , and question answering weston2014memory .

Although remarkably successful, the sequence-to-sequence model with attention must process the entire input sequence before producing an output. However, there are tasks where it is useful to start producing outputs before the entire input is processed. These tasks include both speech recognition and machine translation, especially because a good online speech recognition system and a good online translation system can be combined to produce a voice-based instantaneous translator (also known as a Babel Fish adams1995hitch ), which is an important application.

In this work, we present a simple online sequence-to-sequence model that uses binary stochastic variables to select the timesteps at which to produce outputs. The stochastic variables are trained with a policy gradient method (similarly to Mnih et al. mnih2014recurrent and Zaremba and Sutskever zaremba2015reinforcement ). Despite its simplicity, this method achieves encouraging results on the TIMIT and the Wall Street Journal speech recognition datasets. Our results suggest that a larger scale version of the model will likely achieve state-of-the-art results on many sequence-to-sequence problems.

1.1 Relation To Prior Work

While the idea of soft attention as it is currently understood was first introduced by Graves graves2013generating , the first truly successful formulation of soft attention is due to Bahdanau et al. bahdanau-iclr-2015 . It used a neural architecture that implements a “search query” that finds the most relevant element in the input, which it then picks out. Soft attention has quickly become the method of choice in various settings because it is easy to implement and it has led to state-of-the-art results on various tasks. For example, the Neural Turing Machine graves2014neural and the Memory Network sukhbaatar2015end both use an attention mechanism similar to that of Bahdanau et al. bahdanau-iclr-2015 to implement models for learning algorithms and for question answering.

While soft attention is immensely flexible and easy to use, it assumes that the entire input sequence is available at test time. This is an inconvenient assumption whenever we wish to produce the relevant output as soon as possible, without first processing the input sequence in its entirety. Doing so is useful in the context of a speech recognition system that runs on a smartphone, and it is especially useful in a combined speech recognition and machine translation system.

There exists prior work that investigated methods for producing an output without consuming the input in its entirety. This includes the work by Mnih et al. mnih2014recurrent and Zaremba and Sutskever zaremba2015reinforcement , who used the Reinforce algorithm to learn where to consume the input and when to emit an output. Finally, Jaitly et al. jaitly2015online used an online sequence-to-sequence method with conditioning on partial inputs, which yielded encouraging results on the TIMIT dataset. Our work is most similar to that of Zaremba and Sutskever zaremba2015reinforcement . However, we are able to simplify the learning problem for the policy gradient component of the algorithm by using only one stochastic decision per time step, which makes the model much more effective in practice.

2 Methods

Figure 1: Overall Architecture of the model used in this paper.

In this section we describe the details of our recurrent neural network architecture, the reward function, and the training and inference procedure. We refer the reader to figure 1 for the details of the model.

We begin by describing the probabilistic model we used in this work. At each time step $i$, a recurrent neural network (represented in figure 1) decides whether to emit an output token. The decision is made by a stochastic binary logistic unit with emission probability $\tilde{b}_i$. Let $b_i$ be drawn from a Bernoulli distribution with parameter $\tilde{b}_i$, such that if $b_i$ is 1, then the model outputs the vector $d_i$, a softmax distribution over the set of possible tokens. The current position in the output sequence can be written $p_i = \sum_{j \le i} b_j$, which is incremented by 1 every time the model chooses to emit. The model's goal is then to predict the desired output $y_{p_i}$; thus whenever $b_i = 1$, the model experiences a loss given by

$$\mathcal{L}_i = -\sum_{k} \mathbb{1}[y_{p_i} = k] \, \log d_i(k) = -\log d_i(y_{p_i}),$$

where $k$ ranges over the number of possible output tokens.

At each step of the RNN, the binary decision of the previous timestep, $b_{i-1}$, and the previous target, $y_{p_{i-1}}$, are fed into the model as input. This feedback ensures that the model's outputs are maximally dependent, so the model belongs to the sequence-to-sequence family.

We train this model by estimating the gradient of the log probability of the target sequence with respect to the parameters of the model. While this model is not fully differentiable because it uses non-differentiable binary stochastic units, we can estimate the gradients with respect to the model parameters by using a policy gradient method, which has been discussed in detail by Schulman et al. schulman2015gradient and used by Zaremba and Sutskever zaremba2015reinforcement .

In more detail, we use supervised learning to train the network to make the correct output predictions, and reinforcement learning to train the network to decide when to emit the various outputs. Let us assume that the input sequence is given by $(x_1, \ldots, x_{T_1})$ and let the desired sequence be $(y_1, \ldots, y_{T_2})$, where $y_{T_2}$ is a special end-of-sequence token, and where we assume that $T_2 \le T_1$. Then the log probability of the model is given by the following equations:

$$h_i = \mathrm{RNN}(h_{i-1}, x_i, b_{i-1}, y_{p_{i-1}}) \quad (1)$$
$$\tilde{b}_i = \sigma(W_b h_i + c_b) \quad (2)$$
$$b_i \sim \mathrm{Bernoulli}(\tilde{b}_i) \quad (3)$$
$$p_i = \sum_{j \le i} b_j \quad (4)$$
$$d_i = \mathrm{softmax}(W_d h_i + c_d) \quad (5)$$
$$\mathcal{R}_i = b_i \log d_i(y_{p_i}) \quad (6)$$
$$\log p(y \mid x, b) = \sum_{i=1}^{T_1} \mathcal{R}_i = \sum_{i=1}^{T_1} b_i \log d_i(y_{p_i}) \quad (7)$$

In the above equations, $p_i$ is the “position” of the model in the output, which is always equal to $\sum_{j \le i} b_j$: the position advances if and only if the model makes a prediction. Note that we define $y_0$ to be a special beginning-of-sequence symbol. The above equations also suggest that our model can easily be implemented within a static graph in a neural net library such as TensorFlow, even though the model has, conceptually, a dynamic neural network architecture.

Following Zaremba and Sutskever zaremba2015reinforcement , we modify the model from the above equations by forcing $b_i$ to be equal to 1 whenever the number of remaining input steps is no greater than the number of target tokens still to be emitted (i.e., whenever $T_1 - i \le T_2 - p_{i-1}$). Doing so ensures that the model will be forced to predict the entire target sequence $(y_1, \ldots, y_{T_2})$, and that it will not be able to learn the degenerate solution in which it chooses never to make any prediction and therefore never experiences any prediction error.
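To make the sampling process concrete, the following is a minimal NumPy sketch of the forward pass described above, using a plain tanh RNN cell in place of the LSTM actually used in the paper; the dimensions, parameter names, and toy data are hypothetical, and the forced-emission rule follows the description above rather than a verified implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions and parameters (not taken from the paper).
T1, T2, d_in, d_h, vocab, d_e = 20, 5, 8, 16, 10, 4
x = rng.normal(size=(T1, d_in))                # input frames x_1..x_T1
y = rng.integers(0, vocab, size=T2)            # target token ids y_1..y_T2
W_x = 0.1 * rng.normal(size=(d_h, d_in))
W_h = 0.1 * rng.normal(size=(d_h, d_h))
W_f = 0.1 * rng.normal(size=(d_h, d_e + 1))    # feedback: prev target embedding + prev decision
w_b = 0.1 * rng.normal(size=d_h)               # readout for the emission probability
W_d = 0.1 * rng.normal(size=(vocab, d_h))      # readout for the token softmax
E_y = 0.1 * rng.normal(size=(vocab + 1, d_e))  # token embeddings, last row = <s>

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

h = np.zeros(d_h)                  # hidden state
p = 0                              # position p_i in the output
prev_b, prev_y = 0.0, E_y[-1]      # feedback inputs b_{i-1} and y_{p_{i-1}} (<s> at the start)
log_prob, decisions = 0.0, []
for i in range(T1):
    feedback = np.concatenate([prev_y, [prev_b]])
    h = np.tanh(W_x @ x[i] + W_h @ h + W_f @ feedback)  # simple RNN cell standing in for an LSTM
    b_tilde = 1.0 / (1.0 + np.exp(-w_b @ h))            # emission probability
    b = float(rng.random() < b_tilde)                   # sample the binary decision b_i
    if (T1 - i) <= (T2 - p):                            # force emission so the whole target is produced
        b = 1.0
    if b == 1.0 and p < T2:                             # emit: score target y_{p_i}, advance the position
        d = softmax(W_d @ h)
        log_prob += np.log(d[y[p]])
        prev_y = E_y[y[p]]
        p += 1
    prev_b = b
    decisions.append(int(b))

print("emission decisions:", decisions)
print("log p(y | b, x):   ", round(log_prob, 3))
```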

Figure 2: The impact of entropy regularization on emission locations. Each line shows the emission predictions made for an example input utterance, with each symbol representing 3 input time steps. 'x' indicates that the model chooses to emit an output at that time step, whereas '-' indicates otherwise. Top line: without an entropy penalty the model emits symbols either at the start or at the end of the input, and is unable to get meaningful gradients to learn a model. Middle line: with entropy regularization, the model avoids clustering its emission predictions in time, spreads the emissions out meaningfully, and learns a model. Bottom line: using KL divergence regularization of the emission probability also mitigates the clustering problem, albeit not as effectively as entropy regularization.

We now elaborate on the manner in which the gradient is computed. It is clear that for a given value of the binary decisions $b = (b_1, \ldots, b_{T_1})$, we can compute the gradient of $\log p(y \mid x, b)$ with respect to the model parameters $\theta$ using the backpropagation algorithm. Figuring out how to learn the emission decisions $b_i$ is slightly more challenging. To understand it, we will factor the reward into an expression and a distribution over the binary vectors, and derive a gradient estimate with respect to the parameters of the model:

$$J(\theta) = \sum_{b} p(b \mid x; \theta) \, \log p(y \mid x, b; \theta) \quad (8)$$

Differentiating, we get

$$\nabla_\theta J(\theta) = \mathbb{E}_{b \sim p(b \mid x; \theta)}\!\left[ \nabla_\theta \log p(y \mid x, b; \theta) + \log p(y \mid x, b; \theta)\, \nabla_\theta \log p(b \mid x; \theta) \right] \quad (9)$$

where $p(b \mid x; \theta)$ is the probability of a binary sequence of the decision variables. In our model, $p(b \mid x; \theta)$ is computed using the chain rule over the $\tilde{b}_i$ probabilities:

$$p(b \mid x; \theta) = \prod_{i=1}^{T_1} p(b_i \mid b_{<i}, x; \theta) = \prod_{i=1}^{T_1} \tilde{b}_i^{\,b_i} (1 - \tilde{b}_i)^{1 - b_i} \quad (10)$$

Since the gradient in equation 9 is a policy gradient, it has very high variance, and variance reduction techniques must be applied. As is common in such problems, we use centering (also known as baselines) and Rao-Blackwellization to reduce the variance of such models. See Mnih and Gregor anvil for an example of the use of such techniques in training generative models with stochastic units.

Baselines are commonly used in the reinforcement learning literature to reduce the variance of estimators, by relying on the identity $\mathbb{E}_{p(b \mid x; \theta)}\!\left[ \nabla_\theta \log p(b \mid x; \theta) \right] = 0$. Thus the gradient in equation 9 can be better estimated by the following, through the use of a well chosen baseline function $\Omega_i(s_i)$, where $s_i$ is a vector of side information, which here happens to be the input and all the outputs up to timestep $i$:

$$\nabla_\theta J(\theta) = \mathbb{E}_{b}\!\left[ \nabla_\theta \log p(y \mid x, b; \theta) + \sum_{i=1}^{T_1} \big( \log p(y \mid x, b; \theta) - \Omega_i(s_i) \big)\, \nabla_\theta \log p(b_i \mid b_{<i}, x; \theta) \right] \quad (11)$$

The variance of this estimator can be further reduced by Rao-Blackwellization, which replaces the total reward with the reward accrued from timestep $i$ onwards, giving:

$$\nabla_\theta J(\theta) = \mathbb{E}_{b}\!\left[ \nabla_\theta \log p(y \mid x, b; \theta) + \sum_{i=1}^{T_1} \Big( \sum_{j=i}^{T_1} \mathcal{R}_j - \Omega_i(s_i) \Big)\, \nabla_\theta \log p(b_i \mid b_{<i}, x; \theta) \right] \quad (12)$$
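As an illustration of how the per-decision weights in such an estimator might be assembled, here is a small NumPy sketch; the function name, the toy rewards, and the baseline values are hypothetical, and the baseline predictor itself (a function of the side information $s_i$) is not shown.

```python
import numpy as np

def policy_gradient_weights(step_rewards, b, b_tilde, baselines):
    """Per-decision coefficients for a score-function (REINFORCE) estimator.

    step_rewards[i] : reward collected at step i (e.g. b_i * log d_i(y_{p_i}))
    b[i], b_tilde[i]: sampled decision and emission probability at step i
    baselines[i]    : baseline predicted from the side information at step i
    Returns the centred return-to-go for each decision and the score
    d log p(b_i | b_tilde_i) / d b_tilde_i, which a full implementation would
    chain through the parameters of the emission unit.
    """
    r = np.asarray(step_rewards, dtype=float)
    # Rao-Blackwellization: decision b_i only influences rewards from step i onward.
    returns_to_go = np.cumsum(r[::-1])[::-1]
    coeff = returns_to_go - np.asarray(baselines, dtype=float)
    b = np.asarray(b, dtype=float)
    bt = np.asarray(b_tilde, dtype=float)
    score = b / bt - (1.0 - b) / (1.0 - bt)   # d/d b_tilde of log Bernoulli(b; b_tilde)
    return coeff, score

# Tiny usage with made-up numbers.
coeff, score = policy_gradient_weights(
    step_rewards=[0.0, -1.2, 0.0, -0.7],
    b=[0, 1, 0, 1],
    b_tilde=[0.3, 0.8, 0.4, 0.9],
    baselines=[-2.0, -2.0, -1.0, -1.0])
print(coeff)   # centred return-to-go per decision
print(score)   # Bernoulli score function per decision
```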

Finally, we note that reinforcement learning models are often trained with augmented objectives that add an entropy penalty for actions that are too confident levine2014motor ; williams1992simple . We found this to be crucial for our models to train successfully. In light of the regularization term, the augmented reward at any time step $i$ is:

$$\hat{\mathcal{R}}_i = \mathcal{R}_i - \lambda \big( \tilde{b}_i \log \tilde{b}_i + (1 - \tilde{b}_i) \log (1 - \tilde{b}_i) \big)$$

Without the use of this regularization, the RNN emits all the symbols clustered in time, either at the very start of the input sequence or at the end. The model has a difficult time recovering from this configuration, since the gradients are too noisy and biased. With this penalty, however, the model successfully navigates away from parameters that lead to very clustered predictions and eventually learns sensible parameters. An alternative we explored was to use the KL divergence of the predictions from a target Bernoulli rate of emission at every step. While this helped the model, it was not as successful as entropy regularization. See figure 2 for an example of this clustering problem and how regularization ameliorates it.
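A small sketch of how such an entropy bonus could be added to the per-step rewards (NumPy; the weight and the exact functional form are assumptions rather than the paper's exact recipe):

```python
import numpy as np

def entropy_augmented_rewards(step_rewards, b_tilde, weight):
    """Add a Bernoulli-entropy bonus to each step's reward."""
    bt = np.clip(np.asarray(b_tilde, dtype=float), 1e-6, 1.0 - 1e-6)
    entropy = -(bt * np.log(bt) + (1.0 - bt) * np.log(1.0 - bt))
    return np.asarray(step_rewards, dtype=float) + weight * entropy

# Confident emission probabilities (near 0 or 1) receive almost no bonus,
# uncertain ones receive up to log(2).
print(entropy_augmented_rewards([0.0, -1.2], [0.98, 0.5], weight=1.0))
```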

3 Experiments and Results

Figure 3: Example training run on TIMIT.

We conducted experiments on two different speech corpora using this model. Initial experiments were conducted on TIMIT to find hyperparameters that could lead to stable behavior of the model. The second set of experiments was conducted on the Wall Street Journal corpus to assess whether the method works on a large vocabulary speech recognition task that is much more realistic and complicated than TIMIT phoneme recognition. While our experiments on TIMIT produced numbers close to the state of the art, our results on WSJ are only a preliminary demonstration that this method indeed works on such a task. Further hyperparameter tuning and method development should improve results on this task significantly.

3.1 TIMIT

The TIMIT dataset is a phoneme recognition task in which phoneme sequences have to be inferred from input audio utterances. The training set contains 3696 different audio clips, and each target is a sequence drawn from a set of 60 phonemes. Before scoring, these are collapsed to a standard 39-phoneme set, and the Levenshtein edit distance is then computed to obtain the phoneme error rate (PER).
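For concreteness, the PER can be computed roughly as in the sketch below; the `collapse` mapping shown is only a tiny illustrative fragment of the standard 60-to-39 folding, not the full table.

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences of symbols."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution (or match)
    return d[-1]

def phoneme_error_rate(ref_60, hyp_60, collapse):
    """PER after mapping 60-phoneme labels onto the 39-phoneme scoring set."""
    ref = [collapse.get(p, p) for p in ref_60]
    hyp = [collapse.get(p, p) for p in hyp_60]
    return edit_distance(ref, hyp) / max(len(ref), 1)

# Toy usage with an illustrative fragment of the folding map.
collapse = {"ao": "aa", "axr": "er", "zh": "sh"}
print(phoneme_error_rate(["aa", "axr", "k"], ["ao", "er", "t"], collapse))  # 1/3
```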

We trained models with various numbers of layers on TIMIT, starting with a small number of layers. Initially, we achieved a 28% phoneme error rate (PER) using a three-layer LSTM model with 300 units in each layer. During these experiments we found that an appropriately chosen weight for the entropy regularization term was important for producing the best results. Further, it was crucial to decay this parameter as learning proceeded, to allow the model to sharpen its predictions once enough learning signal was available. To do this, the entropy penalty was initialized to 1 and decayed over the course of training. Results were further improved to 23% with the use of dropout, with 15% of the units being dropped out. Results improved further when we used five layers of units. The best results were achieved through the use of Grid LSTMs kalchbrenner2015grid , rather than stacked LSTMs.

See figure 3 for an example of a training curve. It can be seen that the model requires a large number of updates (>100K) before meaningful models are learnt. However, once learning starts, steady progress is made, even though the model is trained by policy gradient.

Training of the models was done using asynchronous gradient descent with 20 replicas in TensorFlow abadi2016tensorflow . Training was much more stable when Adam was used compared to SGD, although results were more or less the same when both optimizers were run to convergence. We used a learning rate of 1e-4 with Adam. In order to speed up RNN training we also bucketed examples by length: each replica used only examples whose length lay within specific ranges. During training, the dropout rate was increased from 0 as training proceeded, because using dropout early in training prevented the model from latching on to any training signal.
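A minimal sketch of length bucketing of the kind described above (the bucket boundaries are made up, not the ones used in the experiments):

```python
from collections import defaultdict

def bucket_by_length(utterances, boundaries=(200, 400, 800, 1600)):
    """Group utterances into length buckets so each replica can train on
    similarly sized examples."""
    buckets = defaultdict(list)
    for utt in utterances:
        for b in boundaries:
            if len(utt) <= b:
                buckets[b].append(utt)
                break
        else:
            buckets["overflow"].append(utt)
    return buckets

# Toy usage: four utterances of different lengths.
data = [list(range(n)) for n in (150, 390, 410, 2000)]
print({k: len(v) for k, v in bucket_by_length(data).items()})
```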

Lastly, we note that the input filterbanks were processed such that three consecutive frames of filterbanks, representing a total of 30 ms of speech, were concatenated and fed to the model as a single input. This results in a smaller number of input steps and allows the model to learn hard alignments much faster than it would otherwise.
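A sketch of this frame stacking step, assuming 10 ms filterbank frames and a stacking factor of 3; how leftover frames at the end of an utterance are handled is an assumption.

```python
import numpy as np

def stack_frames(filterbanks, stack=3):
    """Concatenate groups of `stack` consecutive filterbank frames into one
    input vector, reducing the number of input time steps by that factor."""
    T, d = filterbanks.shape
    T = T - (T % stack)                       # drop leftover frames for simplicity
    return filterbanks[:T].reshape(T // stack, stack * d)

feats = np.random.randn(100, 40)              # 100 frames of 40-dim filterbanks (10 ms each)
print(stack_frames(feats).shape)              # (33, 120): each step now covers ~30 ms
```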

Table 1 shows a summary of the results achieved on TIMIT by our method and other, more mature models.

Method  PER
Connectionist Temporal Classification (CTC) graves2013speech  19.6%
Deep Neural Network - Hidden Markov Model (DNN-HMM) mohamed2012acoustic  20.7%
Sequence to Sequence Model With Attention (our implementation)  24.5%
Online Sequence to Sequence Model jaitly2015online  19.8%
Our Model (Stacked LSTM)  21.5%
Our Model (Grid LSTM)  20.5%
Table 1: Results on TIMIT using unidirectional LSTMs for various models.

3.2 Wall Street Journal

Method  WER
Connectionist Temporal Classification (CTC) (4 layer bidirectional LSTM) graves-icml-2014  27.3%
Sequence to Sequence Model With Attention (4 layer bidirectional GRU) BahdanauCSBB15  18.6%
Our Model (4 layer unidirectional LSTM)  27.0%
Table 2: Results on WSJ.

We used the train_si284 dataset of the Wall Street Journal (WSJ) corpus for the second experiment. This dataset consists of more than thirty-seven thousand utterances, corresponding to around 81 hours of audio. We trained our model to predict character sequences directly from the audio signal, without the use of pronunciation dictionaries or language models. Since WSJ is a larger corpus, we used 50 replicas for the asynchronous SGD training. Each utterance was speaker mean centered, as is standard for this dataset. Similar to the TIMIT setup above, we concatenated three consecutive frames of filterbanks, representing a total of 30 ms of speech, as the input to the model at each time step. This is especially useful for the WSJ dataset because its audio clips are typically much longer than those of TIMIT.
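A sketch of per-speaker mean centering of the filterbank features (the data layout and variable names are hypothetical):

```python
import numpy as np

def speaker_mean_center(features_by_speaker):
    """Subtract each speaker's mean feature vector from all of that speaker's
    utterances."""
    centered = {}
    for speaker, utts in features_by_speaker.items():
        mean = np.mean(np.concatenate(utts, axis=0), axis=0)
        centered[speaker] = [u - mean for u in utts]
    return centered

# Toy usage: one speaker with two utterances of filterbank features.
fake = {"spk1": [np.random.randn(50, 40) + 3.0, np.random.randn(80, 40) + 3.0]}
centered = speaker_mean_center(fake)
print(np.concatenate(centered["spk1"]).mean(axis=0)[:3])  # approximately zero
```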

Figure 4: Example training run on WSJ.

A constant entropy penalty of one was used for the first 200,000 steps, after which it was decayed. Stacked LSTMs with 2 layers of 300 hidden units were used for this experiment (admittedly a small number of units, and results should improve with the use of a larger model; however, as a proof of concept it shows that the model can be trained to give reasonable accuracy). Gradients were clipped to a maximum norm of 30 for these experiments.
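For reference, clipping gradients to a maximum norm can be sketched as below; whether the clipping is applied to the global norm or to each variable separately is not stated in the text, so the global-norm variant here is an assumption.

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=30.0):
    """Rescale a list of gradient arrays so their joint L2 norm is at most max_norm."""
    norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    scale = min(1.0, max_norm / (norm + 1e-12))
    return [g * scale for g in grads], norm

# Toy usage: the clipped gradients have norm 30.
grads = [np.ones((10, 10)) * 5.0, np.ones(10) * 5.0]
clipped, norm = clip_by_global_norm(grads)
print(norm, np.sqrt(sum(np.sum(g ** 2) for g in clipped)))
```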

It was seen that if dropout was used early in training, the model was unable to learn; thus dropout was introduced only much later in training. Another difference from the TIMIT experiments was that the stacked model outperformed the grid LSTM model.

Figure 5: Example output for an utterance in WSJ. The blue line shows the emission probability $\tilde{b}_i$, while the red line shows the discrete emission decisions $b_i$ over the time steps corresponding to an input utterance. The bottom panel shows the corresponding filterbanks. It can be seen that the model often decides to emit symbols only when new audio comes in. This possibly reflects the network's need to output the symbols it has already heard in order to free itself up to process the new audio effectively.

3.2.1 Example transcripts

We show three example transcripts to give a flavour of the kinds of outputs this model is able to produce. The first is an example of a transcript with several errors; it can be seen, however, that the outputs have close phonetic similarity to the actual transcript. The second example is an utterance that was transcribed almost entirely correctly, other than the word AND being substituted by END. Occasionally the model is even able to transcribe a full utterance entirely correctly, as in the third example below.

REF: ONE LONGTIME EASTERN PILOT INSISTED THAT THE SAFETY CAMPAIGN INVOLVED NUMEROUS SERIOUS PROBLEMS BUT AFFIRMED THAT THE CARDS OFTEN CONTAINED INSUFFICIENT INFORMATION FOR REGULATORS TO ACT ON
HYP: ONE LONGTIME EASTERN PILOT INSISTED THAT THE SAFETY CAMPAIGN INVOLVED NEW MERCE SERIOUS PROBLEMS BUT AT FIRM THAT THE CARDS OFTEN CONTAINED IN SECURITION INFORMATION FOR REGULATORS TO ACT
REF: THE COMPANY IS OPENING SEVEN FACTORIES IN ASIA THIS YEAR AND NEXT
HYP: THE COMPANY IS OPENING SEVEN FACTORIES IN ASIA THIS YEAR END NEXT
REF: HE SAID HE AND HIS FATHER J WADE KINCAID WHO IS CHAIRMAN OWN A TOTAL OF ABOUT SIX POINT FOUR PERCENT OF THE COMPANYS COMMON
HYP: HE SAID HE AND HIS FATHER J WADE KINCAID WHO IS CHAIRMAN OWN A TOTAL OF ABOUT SIX POINT FOUR PERCENT OF THE COMPANYS COMMON

3.2.2 Example Emissions

Figure 5 shows a plot of the emission probabilities produced as the input audio is processed. Interestingly, the model produces the final words only at the end of the input sequence. Presumably this happens because no new audio comes in after halfway through the utterance, so the model has no need to clear its internal memory to process new information.

4 Conclusions

In this work, we presented a simple model that can solve sequence-to-sequence problems without the need to process the entire input sequence first. Our model directly maximizes the log probability of the correct answer by combining standard supervised backpropagation and a policy gradient method.

Despite its simplicity, our model achieved encouraging results on a small scale and a medium scale speech recognition task. We hope that by scaling up the model, it will achieve near state-of-the-art results on speech recognition and on machine translation, which will in turn enable the construction of the universal instantaneous translator.

Our results also suggest that policy gradient methods are reasonably powerful, and that they can train highly complex neural networks that learn to make nontrivial stochastic decisions.

5 Acknowledgements

We would like to thank Dieterich Lawson for his helpful comments on the manuscript.

References