Speech Recognition with Deep Recurrent Neural Networks
Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
Neural networks have a long history in speech recognition, usually in combination with hidden Markov models [1, 2]. They have gained attention in recent years with the dramatic improvements in acoustic modelling yielded by deep feedforward networks [3, 4]. Given that speech is an inherently dynamic process, it seems natural to consider recurrent neural networks (RNNs) as an alternative model. HMM-RNN systems have also seen a recent revival [6, 7], but do not currently perform as well as deep networks.
Instead of combining RNNs with HMMs, it is possible to train RNNs ‘end-to-end’ for speech recognition [8, 9, 10]. This approach exploits the larger state-space and richer dynamics of RNNs compared to HMMs, and avoids the problem of using potentially incorrect alignments as training targets. The combination of Long Short-term Memory, an RNN architecture with an improved memory, with end-to-end training has proved especially effective for cursive handwriting recognition [12, 13]. However it has so far made little impact on speech recognition.
RNNs are inherently deep in time, since their hidden state is a function of all previous hidden states. The question that inspired this paper was whether RNNs could also benefit from depth in space; that is, from stacking multiple recurrent hidden layers on top of each other, just as feedforward layers are stacked in conventional deep networks. To answer this question we introduce deep Long Short-term Memory RNNs and assess their potential for speech recognition. We also present an enhancement to a recently introduced end-to-end learning method that jointly trains two separate RNNs as acoustic and linguistic models. Sections 2 and 3 describe the network architectures and training methods, Section 4 provides experimental results and concluding remarks are given in Section 5.
Given an input sequence x = (x_1, ..., x_T), a standard recurrent neural network (RNN) computes the hidden vector sequence h = (h_1, ..., h_T) and output vector sequence y = (y_1, ..., y_T) by iterating the following equations from t = 1 to T:

    h_t = \mathcal{H}(W_{xh} x_t + W_{hh} h_{t-1} + b_h)
    y_t = W_{hy} h_t + b_y

where the W terms denote weight matrices (e.g. W_{xh} is the input-hidden weight matrix), the b terms denote bias vectors (e.g. b_h is the hidden bias vector) and \mathcal{H} is the hidden layer function.
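As a concrete illustration, the recurrence above can be sketched in a few lines of Python. This is our own minimal sketch, not the paper's implementation: weights are scalars rather than matrices, and tanh stands in for the generic hidden layer function H.

```python
import math

def rnn_forward(xs, w_xh, w_hh, b_h, w_hy, b_y):
    # h_t = H(W_xh x_t + W_hh h_{t-1} + b_h);  y_t = W_hy h_t + b_y
    # Scalar weights and tanh as H, purely for illustration.
    h, ys = 0.0, []
    for x in xs:
        h = math.tanh(w_xh * x + w_hh * h + b_h)
        ys.append(w_hy * h + b_y)
    return ys
```

With matrix-valued weights the structure is identical; only the multiplications become matrix-vector products.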
\mathcal{H} is usually an elementwise application of a sigmoid function. However we have found that the Long Short-Term Memory (LSTM) architecture, which uses purpose-built memory cells to store information, is better at finding and exploiting long range context. Fig. 1 illustrates a single LSTM memory cell. For the version of LSTM used in this paper, \mathcal{H} is implemented by the following composite function:

    i_t = \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i)
    f_t = \sigma(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} c_{t-1} + b_f)
    c_t = f_t c_{t-1} + i_t \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)
    o_t = \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o)
    h_t = o_t \tanh(c_t)

where \sigma is the logistic sigmoid function, and i, f, o and c are respectively the input gate, forget gate, output gate and cell activation vectors, all of which are the same size as the hidden vector h. The weight matrices from the cell to gate vectors (e.g. W_{ci}) are diagonal, so element m in each gate vector only receives input from element m of the cell vector.
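A single step of this composite function can be sketched in pure Python. The parameter names (W_xi, w_ci, ...) and the list-based vectors are our own illustrative choices; the diagonal cell-to-gate matrices appear as plain vectors w_c*, multiplied elementwise.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def matvec(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

def lstm_step(x, h_prev, c_prev, p):
    """One step of the LSTM composite function; p maps names to
    matrices W_x*, W_h*, diagonal peephole vectors w_c*, biases b_*."""
    def preact(kx, kh, kb):
        return [a + bh + bb for a, bh, bb in zip(
            matvec(p[kx], x), matvec(p[kh], h_prev), p[kb])]
    # i_t = sigma(W_xi x_t + W_hi h_{t-1} + w_ci * c_{t-1} + b_i)
    i = [sigmoid(a + wc * c) for a, wc, c
         in zip(preact("W_xi", "W_hi", "b_i"), p["w_ci"], c_prev)]
    # f_t = sigma(W_xf x_t + W_hf h_{t-1} + w_cf * c_{t-1} + b_f)
    f = [sigmoid(a + wc * c) for a, wc, c
         in zip(preact("W_xf", "W_hf", "b_f"), p["w_cf"], c_prev)]
    # c_t = f_t * c_{t-1} + i_t * tanh(W_xc x_t + W_hc h_{t-1} + b_c)
    c = [ft * ct + it * math.tanh(a) for ft, ct, it, a
         in zip(f, c_prev, i, preact("W_xc", "W_hc", "b_c"))]
    # o_t = sigma(W_xo x_t + W_ho h_{t-1} + w_co * c_t + b_o) -- uses c_t
    o = [sigmoid(a + wc * ct) for a, wc, ct
         in zip(preact("W_xo", "W_ho", "b_o"), p["w_co"], c)]
    # h_t = o_t * tanh(c_t)
    h = [ot * math.tanh(ct) for ot, ct in zip(o, c)]
    return h, c
```

Note the output gate peeks at the new cell state c_t, while the input and forget gates see the previous one.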
One shortcoming of conventional RNNs is that they are only able to make use of previous context. In speech recognition, where whole utterances are transcribed at once, there is no reason not to exploit future context as well. Bidirectional RNNs (BRNNs) do this by processing the data in both directions with two separate hidden layers, which are then fed forwards to the same output layer. As illustrated in Fig. 2, a BRNN computes the forward hidden sequence \overrightarrow{h}, the backward hidden sequence \overleftarrow{h} and the output sequence y by iterating the backward layer from t = T to 1, the forward layer from t = 1 to T and then updating the output layer:

    \overrightarrow{h}_t = \mathcal{H}(W_{x\overrightarrow{h}} x_t + W_{\overrightarrow{h}\overrightarrow{h}} \overrightarrow{h}_{t-1} + b_{\overrightarrow{h}})
    \overleftarrow{h}_t = \mathcal{H}(W_{x\overleftarrow{h}} x_t + W_{\overleftarrow{h}\overleftarrow{h}} \overleftarrow{h}_{t+1} + b_{\overleftarrow{h}})
    y_t = W_{\overrightarrow{h}y} \overrightarrow{h}_t + W_{\overleftarrow{h}y} \overleftarrow{h}_t + b_y
Combining BRNNs with LSTM gives bidirectional LSTM, which can access long-range context in both input directions.
A crucial element of the recent success of hybrid HMM-neural network systems is the use of deep architectures, which are able to build up progressively higher level representations of acoustic data. Deep RNNs can be created by stacking multiple RNN hidden layers on top of each other, with the output sequence of one layer forming the input sequence for the next. Assuming the same hidden layer function is used for all N layers in the stack, the hidden vector sequences h^n are iteratively computed from n = 1 to N and from t = 1 to T:

    h^n_t = \mathcal{H}(W_{h^{n-1} h^n} h^{n-1}_t + W_{h^n h^n} h^n_{t-1} + b^n_h)

where we define h^0 = x. The network outputs y_t are

    y_t = W_{h^N y} h^N_t + b_y
Deep bidirectional RNNs can be implemented by replacing each hidden sequence h^n with the forward and backward sequences \overrightarrow{h}^n and \overleftarrow{h}^n, and ensuring that every hidden layer receives input from both the forward and backward layers at the level below. If LSTM is used for the hidden layers we get deep bidirectional LSTM, the main architecture used in this paper. As far as we are aware this is the first time deep LSTM has been applied to speech recognition, and we find that it yields a dramatic improvement over single-layer LSTM.
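The stacking pattern can be sketched compactly. In this sketch of ours, a plain tanh recurrence with arbitrary fixed scalar weights stands in for LSTM, and the two directions are simply summed at each timestep; in the actual architecture each direction has its own weights into the layer above.

```python
import math

def simple_rnn(xs, reverse=False):
    # Plain tanh recurrence (W_xh = W_hh = 0.5, b = 0), standing in for LSTM.
    seq = list(reversed(xs)) if reverse else xs
    h, hs = 0.0, []
    for x in seq:
        h = math.tanh(0.5 * x + 0.5 * h)
        hs.append(h)
    # Re-reverse so index t of the output aligns with index t of the input.
    return list(reversed(hs)) if reverse else hs

def bidi_layer(xs):
    fwd = simple_rnn(xs)                 # iterated t = 1 .. T
    bwd = simple_rnn(xs, reverse=True)   # iterated t = T .. 1
    # every timestep sees both past and future context
    return [f + b for f, b in zip(fwd, bwd)]

def deep_bidi(xs, levels=3):
    seq = xs
    for _ in range(levels):  # output sequence of one level feeds the next
        seq = bidi_layer(seq)
    return seq
```

The key point is structural: depth in space comes from the outer loop over levels, while depth in time comes from the recurrences inside each level.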
We focus on end-to-end training, where RNNs learn to map directly from acoustic to phonetic sequences. One advantage of this approach is that it removes the need for a predefined (and error-prone) alignment to create the training targets. The first step is to use the network outputs to parameterise a differentiable distribution Pr(y|x) over all possible phonetic output sequences y given an acoustic input sequence x. The log-probability log Pr(z|x) of the target output sequence z can then be differentiated with respect to the network weights using backpropagation through time, and the whole system can be optimised with gradient descent. We now describe two ways to define the output distribution and hence train the network. We refer throughout to the length of x as T, the length of z as U, and the number of possible phonemes as K.
Connectionist Temporal Classification (CTC) uses a softmax layer to define a separate output distribution Pr(k|t) at every step t along the input sequence. This distribution covers the K phonemes plus an extra blank symbol ∅ which represents a non-output (the softmax layer is therefore size K + 1). Intuitively the network decides whether to emit any label, or no label, at every timestep. Taken together these decisions define a distribution over alignments between the input and target sequences. CTC then uses a forward-backward algorithm to sum over all possible alignments and determine the normalised probability Pr(z|x) of the target sequence given the input sequence. Similar procedures have been used elsewhere in speech and handwriting recognition to integrate out over possible segmentations [18, 19]; however CTC differs in that it ignores segmentation altogether and sums over single-timestep label decisions instead.
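The sum over alignments is the standard CTC forward recursion, which can be sketched directly. This is a compact illustrative version (labels as integers, blank = 0, no log-space scaling, which a real implementation would need for numerical stability):

```python
def ctc_forward(probs, target, blank=0):
    """CTC forward pass: sum over all alignments of `target`.
    probs[t][k] is the softmax output at timestep t for label k.
    Returns Pr(target | input)."""
    # Interleave blanks around the labels: z -> ^ z1 ^ z2 ^ ...
    ext = [blank]
    for label in target:
        ext += [label, blank]
    S, T = len(ext), len(probs)
    alpha = [[0.0] * S for _ in range(T)]
    alpha[0][0] = probs[0][blank]
    if S > 1:
        alpha[0][1] = probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]                    # stay
            if s >= 1:
                a += alpha[t - 1][s - 1]           # advance one position
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1][s - 2]           # skip a blank
            alpha[t][s] = a * probs[t][ext[s]]
    # Valid alignments end on the final label or the final blank.
    return alpha[T - 1][S - 1] + (alpha[T - 1][S - 2] if S > 1 else 0.0)
```

For example, with two timesteps and one non-blank label, the three alignments (z,z), (z,∅), (∅,z) are exactly what the recursion sums over.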
RNNs trained with CTC are generally bidirectional, to ensure that every Pr(k|t) depends on the entire input sequence, and not just the inputs up to t. In this work we focus on deep bidirectional networks, with Pr(k|t) defined as follows:

    y_t = W_{\overrightarrow{h}^N y} \overrightarrow{h}^N_t + W_{\overleftarrow{h}^N y} \overleftarrow{h}^N_t + b_y
    Pr(k|t) = \exp(y_t[k]) / \sum_{k'=1}^{K+1} \exp(y_t[k'])

where y_t[k] is the k^{th} element of the length K + 1 unnormalised output vector y_t, and N is the number of bidirectional levels.
CTC defines a distribution over phoneme sequences that depends only on the acoustic input sequence x. It is therefore an acoustic-only model. A recent augmentation, known as an RNN transducer, combines a CTC-like network with a separate RNN that predicts each phoneme given the previous ones, thereby yielding a jointly trained acoustic and language model. Joint LM-acoustic training has proved beneficial in the past for speech recognition [20, 21].
Whereas CTC determines an output distribution at every input timestep, an RNN transducer determines a separate distribution Pr(k|t, u) for every combination of input timestep t and output timestep u. As with CTC, each distribution covers the K phonemes plus ∅. Intuitively the network ‘decides’ what to output depending both on where it is in the input sequence and the outputs it has already emitted. For a length U target sequence z, the complete set of T U decisions jointly determines a distribution over all possible alignments between x and z, which can then be integrated out with a forward-backward algorithm to determine log Pr(z|x).
In the original formulation Pr(k|t, u) was defined by taking an ‘acoustic’ distribution from the CTC network, a ‘linguistic’ distribution from the prediction network, then multiplying the two together and renormalising. An improvement introduced in this paper is to instead feed the hidden activations of both networks into a separate feedforward output network, whose outputs are then normalised with a softmax function to yield Pr(k|t, u). This allows a richer set of possibilities for combining linguistic and acoustic information, and appears to lead to better generalisation. In particular we have found that the number of deletion errors encountered during decoding is reduced.
Denote by \overrightarrow{h}^N and \overleftarrow{h}^N the uppermost forward and backward hidden sequences of the CTC network, and by p the hidden sequence of the prediction network. At each t, u the output network is implemented by feeding \overrightarrow{h}^N_t and \overleftarrow{h}^N_t to a linear layer to generate the vector l_t, then feeding l_t and p_u to a tanh hidden layer to yield h_{t,u}, and finally feeding h_{t,u} to a size K + 1 softmax layer to determine Pr(k|t, u):

    l_t = W_{\overrightarrow{h}^N l} \overrightarrow{h}^N_t + W_{\overleftarrow{h}^N l} \overleftarrow{h}^N_t + b_l
    h_{t,u} = \tanh(W_{l h} l_t + W_{p h} p_u + b_h)
    y_{t,u} = W_{h y} h_{t,u} + b_y
    Pr(k|t, u) = \exp(y_{t,u}[k]) / \sum_{k'=1}^{K+1} \exp(y_{t,u}[k'])

where y_{t,u}[k] is the k^{th} element of the length K + 1 unnormalised output vector. For simplicity we constrained all non-output layers to be the same size; however they could be varied independently.
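The data flow of this output network can be sketched as follows. This is our own scalar-activation sketch with illustrative parameter names (w_f, w_b, ...), not the paper's code; in the real network all quantities are vectors and matrices.

```python
import math

def transducer_output(h_fwd, h_bwd, p_u, params):
    # l_t: linear layer over the top forward/backward CTC activations
    l_t = params["w_f"] * h_fwd + params["w_b"] * h_bwd + params["b_l"]
    # h_tu: tanh hidden layer combining l_t with prediction activation p_u
    h_tu = math.tanh(params["w_l"] * l_t + params["w_p"] * p_u + params["b_h"])
    # softmax over the K phonemes plus blank (size K + 1)
    logits = [w * h_tu + b for w, b in zip(params["W_out"], params["b_out"])]
    z = [math.exp(v) for v in logits]
    s = sum(z)
    return [v / s for v in z]
```

Because every (t, u) pair shares the same output network, the acoustic and linguistic information is mixed nonlinearly before normalisation, rather than being combined by a fixed multiply-and-renormalise rule.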
RNN transducers can be trained from random initial weights. However they appear to work better when initialised with the weights of a pretrained CTC network and a pretrained next-step prediction network (so that only the output network starts from random weights). The output layers (and all associated weights) used by the networks during pretraining are removed during retraining. In this work we pretrain the prediction network on the phonetic transcriptions of the audio training data; however for large-scale applications it would make more sense to pretrain on a separate text corpus.
RNN transducers can be decoded with beam search to yield an n-best list of candidate transcriptions. In the past CTC networks have been decoded using either a form of best-first decoding known as prefix search, or by simply taking the most active output at every timestep. In this work however we exploit the same beam search as the transducer, with the modification that the output label probabilities do not depend on the previous outputs (so Pr(k|t, u) = Pr(k|t)). We find beam search both faster and more effective than prefix search for CTC. Note the n-best list from the transducer was originally sorted by the length-normalised log-probability log Pr(y)/|y|; in the current work we dispense with the normalisation (which only helps when there are many more deletions than insertions) and sort by Pr(y).
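The simpler of the two CTC decoding schemes mentioned above, taking the most active output at every timestep, can be sketched in a few lines (labels as integers, with blank = 0 as an assumed convention):

```python
def best_path_decode(probs, blank=0):
    """Greedy CTC decoding: pick the most active output per timestep,
    then collapse repeated labels and strip blanks."""
    path = [max(range(len(p)), key=p.__getitem__) for p in probs]
    out, prev = [], None
    for k in path:
        if k != blank and k != prev:
            out.append(k)
        prev = k
    return out
```

This best-path decoding considers only a single alignment, which is why beam search (summing probability mass over alignments per prefix) is generally more accurate.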
Regularisation is vital for good performance with RNNs, as their flexibility makes them prone to overfitting. Two regularisers were used in this paper: early stopping and weight noise (the addition of Gaussian noise to the network weights during training). Weight noise was added once per training sequence, rather than at every timestep. Weight noise tends to ‘simplify’ neural networks, in the sense of reducing the amount of information required to transmit the parameters [23, 24], which improves generalisation.
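Mechanically, weight noise is a one-line perturbation applied per training sequence. A minimal sketch (the flat weight list and the particular sigma are illustrative; in practice the clean weights are kept and restored after each noisy gradient computation):

```python
import random

def add_weight_noise(weights, sigma):
    # Return a noisy copy of the weights; applied once per training
    # sequence, with the clean weights retained for the update itself.
    return [w + random.gauss(0.0, sigma) for w in weights]
```

Because the noise is resampled per sequence rather than per timestep, every timestep within one sequence sees the same perturbed network.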
Phoneme recognition experiments were performed on the TIMIT corpus. The standard 462-speaker set with all SA records removed was used for training, and a separate development set of 50 speakers was used for early stopping. Results are reported for the 24-speaker core test set. The audio data was encoded using a Fourier-transform-based filter-bank with 40 coefficients (plus energy) distributed on a mel-scale, together with their first and second temporal derivatives. Each input vector was therefore size 123. The data were normalised so that every element of the input vectors had zero mean and unit variance over the training set. All 61 phoneme labels were used during training and decoding (so K = 61), then mapped to 39 classes for scoring. Note that all experiments were run only once, so the variance due to random weight initialisation and weight noise is unknown.
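The input normalisation step is straightforward to state precisely: statistics are fitted on the training set only, then applied to every split. A minimal sketch (function names are ours):

```python
import statistics

def fit_norm(train_feats):
    # Per-dimension mean and population standard deviation,
    # computed over the training set only.
    dims = list(zip(*train_feats))
    return ([statistics.fmean(d) for d in dims],
            [statistics.pstdev(d) for d in dims])

def apply_norm(feats, means, stds):
    # Zero mean, unit variance per input dimension.
    return [[(v - m) / s for v, m, s in zip(f, means, stds)]
            for f in feats]
```

Development and test data are normalised with the training-set statistics, never with their own.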
As shown in Table 1, nine RNNs were evaluated, varying along three main dimensions: the training method used (CTC, Transducer or pretrained Transducer), the number of hidden levels (1–5), and the number of LSTM cells in each hidden layer. Bidirectional LSTM was used for all networks except CTC-3l-500h-tanh, which had tanh units instead of LSTM cells, and CTC-3l-421h-uni, where the LSTM layers were unidirectional. All networks were trained using stochastic gradient descent with a fixed learning rate and momentum, starting from small random initial weights drawn uniformly. All networks except CTC-3l-500h-tanh and PreTrans-3l-250h were first trained with no noise and then, starting from the point of highest log-probability on the development set, retrained with Gaussian weight noise until the point of lowest phoneme error rate on the development set. PreTrans-3l-250h was initialised with the weights of CTC-3l-250h, along with the weights of a phoneme prediction network (which also had a hidden layer of 250 LSTM cells), both of which were trained without noise, retrained with noise, and stopped at the point of highest log-probability. PreTrans-3l-250h was trained from this point with noise added. CTC-3l-500h-tanh was trained entirely without weight noise because it failed to learn with noise added. Beam search decoding was used for all networks, with a beam width of 100.
The advantage of deep networks is immediately obvious, with the error rate for CTC dropping from 23.9% to 18.4% as the number of hidden levels increases from one to five. The four networks CTC-3l-500h-tanh, CTC-1l-622h, CTC-3l-421h-uni and CTC-3l-250h all had approximately the same number of weights, but give radically different results. The three main conclusions we can draw from this are (a) LSTM works much better than tanh for this task, (b) bidirectional LSTM has a slight advantage over unidirectional LSTM, and (c) depth is more important than layer size (which supports previous findings for deep networks). Although the advantage of the transducer is slight when the weights are randomly initialised, it becomes more substantial when pretraining is used.
We have shown that the combination of deep, bidirectional Long Short-term Memory RNNs with end-to-end training and weight noise gives state-of-the-art results in phoneme recognition on the TIMIT database. An obvious next step is to extend the system to large vocabulary speech recognition. Another interesting direction would be to combine frequency-domain convolutional neural networks with deep LSTM.
“Tandem connectionist feature extraction for conversational speech recognition,” in International Conference on Machine Learning for Multimodal Interaction (MLMI’04), Berlin, Heidelberg, 2005, pp. 223–231. Springer-Verlag.
“Acoustic modeling using deep belief networks,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 14–22, Jan. 2012.
“An Application of Recurrent Nets to Phone Probability Estimation,” IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 298–305, 1994.
“Speaker-independent phone recognition using hidden Markov models,” IEEE Transactions on Acoustics, Speech, and Signal Processing, 1989.