Automatic speech recognition (ASR) systems are built from two major components: the acoustic model (AM) and the language model (LM). The AM computes the probability of the acoustic observations given a sentence, and the LM encodes a prior over possible sentences. Decoding searches for the most likely sentence:

$\hat{W} = \operatorname*{argmax}_{W \in \mathcal{S}} \; p(O \mid W)\, P(W)$ (1)

where $O$ is the acoustic observation and $\mathcal{S}$ spans all possible sentences.
Equation (1) is not tractable, as the set of sentences is infinite. In practice, we restrict the search space with a beam search using a weighted finite state transducer
(WFST) created from a small n-gram. The output of the search is a list of hypotheses, in the form of an n-best list or a lattice. The hypotheses in this list are then rescored with a stronger language model, such as a bigger n-gram or a neural network. The LMs are trained separately from the acoustic model, in an unsupervised way, to minimise the negative log-likelihood of the data, or equivalently, the perplexity (PPL). Given a sentence, its probability is computed with the joint probability:

$P(W) = \prod_{i=1}^{T} P(w_i \mid w_1, \ldots, w_{i-1})$ (2)
In this formulation, the probability of a word depends on the previous words; in the case of an n-gram, the history used is truncated to the n−1 preceding words. This makes n-grams fast and well suited to short sentences and decoding, but they cannot model long-range dependencies. With RNNLMs the history is theoretically infinite, because it is encoded in a continuous space; however, they require far more computation, so they can only be used to discriminate between the hypotheses generated by the decoder.
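As a toy illustration of the chain rule in Equation (2) and the history truncation an n-gram applies (all names below are illustrative, not from the paper's implementation):

```python
def sentence_logprob(sentence, logprob_fn, order=None):
    """Chain-rule score: log P(w_1..w_T) = sum_i log P(w_i | history).

    With `order` set, the history is truncated to the last order-1 words,
    as an n-gram does; otherwise the full history is used (as in an RNNLM).
    `logprob_fn(word, history)` is a hypothetical conditional model.
    """
    words = sentence.split()
    total = 0.0
    for i, w in enumerate(words):
        history = words[:i]
        if order is not None:
            k = order - 1
            history = history[-k:] if k > 0 else []
        total += logprob_fn(w, tuple(history))
    return total
```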
Most of the literature focuses on minimising the PPL of the models, without considering their discriminative power on the tasks they are applied to, such as ASR or machine translation. In ASR, the quality of a transcription is evaluated with the word error rate (WER), based on the Levenshtein distance between the transcript and the reference.
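For reference, the word-level Levenshtein distance underlying the WER can be computed with the standard dynamic programme; this is a generic sketch, not the paper's implementation:

```python
def word_edit_distance(ref, hyp):
    """Levenshtein distance between word sequences
    (counting insertions, deletions, and substitutions)."""
    r, h = ref.split(), hyp.split()
    # prev[j] holds the distance between the first i-1 reference words
    # and the first j hypothesis words.
    prev = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, hw in enumerate(h, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (rw != hw)))    # substitution / match
        prev = cur
    return prev[-1]

def wer(ref, hyp):
    """Word error rate: edit distance normalised by reference length."""
    return word_edit_distance(ref, hyp) / len(ref.split())
```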
Although PPL was initially shown to correlate with WER , training the RNNLM to minimise the PPL effectively uses a surrogate loss for the rescoring task, and different training schemes can yield models that perform well for WER but poorly for PPL . Moreover, the hypotheses generated by the decoder contain noise from the decoding process, of a form the RNNLM would not have seen in its training data. Additionally, because a language model's score is a sum of log-probabilities, longer sentences tend to have lower scores than shorter ones, which encourages deletions.
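A minimal numerical illustration of this length bias, assuming a toy model that assigns every word the same probability:

```python
import math

def toy_score(words):
    """Sum of log-probabilities under a hypothetical model that assigns
    every word probability 0.1: each extra word adds a negative term."""
    return len(words) * math.log(0.1)

full_hyp = ["the", "cat", "sat", "down"]
short_hyp = ["the", "cat", "sat"]  # one word dropped (a deletion)
```

The shorter hypothesis receives the higher (less negative) score, even though it drops a word.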
The goal of this paper is to address these shortcomings and tune the RNNLM with a discriminative loss so it can learn to discriminate between noisy hypotheses and give a better score to the ones that are likely to minimise the WER. Unlike , we chose to evaluate the discriminative techniques that have been proven to work for acoustic model training.
2 Related work
2.1 Acoustic models
Discriminative training for AMs was first developed for Gaussian mixture models (GMMs), and has since been successfully applied to neural networks . These sequence-discriminative criteria compute probabilities over all possible decoding paths, which is not tractable. They are therefore approximated with lattices and alignments obtained by decoding the training data with a beam search, so the models have to be pre-trained with another objective function first, usually cross-entropy (CE).
 evaluates many different criteria from the families of maximum mutual information (MMI) and minimum Bayes risk (MBR), and finds that state-level minimum Bayes risk (sMBR) produces the best model. In , a lattice-free version of MMI (LF-MMI) is used to train the neural networks, removing the need to pre-train with cross-entropy, and they show that the network can still be improved with sMBR. In , a word-level MBR objective called edit-based MBR (EMBR) is defined, which brings the objective function even closer to the WER and provides improvements over sMBR.
2.2 Language models
There is some literature on discriminative training of language models, but most of it pre-dates the use of neural networks in language modelling and relies on engineered features to train a perceptron or an SVM classifier.
More recently,  presented an RNNLM trained to discriminate between the reference transcript and the one-best hypothesis by optimising the difference in their cross-entropies. The WER improvement they report is marginal, and  improve on their loss function by introducing a margin loss, which they call the Large Margin Language Model:

$\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \max\big(0,\; \tau - (s(W_{\text{ref}}) - s(W_i))\big)$ (3)

where $N$ is the number of candidates in an n-best list, $s(\cdot)$ is the language model score, and $\tau$ is the margin. This encourages the score of the reference to exceed each hypothesis's score by at least $\tau$. They also propose a similar objective where the candidates are ranked against each other, so that the ones with lower WER are given a better score. These losses are used to fine-tune an RNNLM, and they show a more substantial gain in WER. They also note that the PPL of the fine-tuned models is greatly degraded.
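A sketch of this margin objective over an n-best list (function and argument names are ours, and the default margin value is illustrative):

```python
def large_margin_loss(ref_score, hyp_scores, margin=1.0):
    """Hinge loss over an n-best list: penalise any hypothesis whose score
    comes within `margin` of the reference score; hypotheses scoring at
    least `margin` below the reference contribute nothing."""
    n = len(hyp_scores)
    return sum(max(0.0, margin - (ref_score - s)) for s in hyp_scores) / n
```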
This large margin loss is similar to the contrastive entropy loss introduced by , which aims to quantify the ability of a language model to discriminate between a sentence and artificially noised versions of it. They show that a model trained with this loss has a better discriminative power, however no ASR results are presented.
3 Minimum word error loss
The aim of this work is to train the RNNLM to assign scores to the lattice arcs to minimise the true metric of interest, which for ASR is the WER.
3.1 Minimum Bayes risk training
We wish to apply the discriminative training techniques that have been proven to work with acoustic models. We chose to evaluate MBR training, which minimises the expected loss over all paths in the lattice:

$\mathcal{L}_{\text{MBR}}(\theta) = \sum_{W} P(W \mid O, \theta)\, L(W, W_{\text{ref}})$ (4)

where $L(W, W_{\text{ref}})$ can be any loss measuring the distance between the reference $W_{\text{ref}}$ and a hypothesis $W$, and $P(W \mid O, \theta)$ is the probability of $W$ given the acoustic input $O$ and the model parameters $\theta$.
We use the same loss as in , the EMBR. In this formulation, $L$ is the word-level edit distance, the same as the one used to compute the WER. Moreover, since the RNNLM operates on words in the lattice rather than acoustic states, EMBR appears more appropriate than MPE or sMBR.
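Over an n-best list, the expected number of word errors can be written out explicitly. The sketch below uses a softmax over hypothesis scores and is only an illustration; the paper computes the expectation over full lattices with a forward-backward pass.

```python
import math

def expected_word_errors(ref, hyps, log_scores):
    """Expectation of the edit distance over an n-best list:
    sum_i P(h_i) * edit(ref, h_i), with P(h_i) a softmax over the
    hypothesis log-scores (subtracting the max for numerical stability)."""
    m = max(log_scores)
    weights = [math.exp(s - m) for s in log_scores]
    z = sum(weights)

    def edit(a, b):
        # Word-level Levenshtein distance via dynamic programming.
        a, b = a.split(), b.split()
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            cur = [i]
            for j, y in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (x != y)))
            prev = cur
        return prev[-1]

    return sum(w / z * edit(ref, h) for w, h in zip(weights, hyps))
```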
3.2 Computation of the loss
The WER doesn't decompose additively over the arcs of the lattice [11, 12], so it doesn't obey the semiring operations usually used in FST algorithms. This makes computing Equation (4) impractical when lattices are generated on the fly during acoustic model training.  approximates this expectation by sampling paths from the lattice, and in  the n-best paths are selected instead. In , another approximation of the WER contribution of each arc is obtained by using a time alignment of the lattice with the reference.
In these papers, the use of an approximation is motivated by the need for a fast way to compute the loss. However, in our training regime, the acoustic model is fixed and the lattices generated by the decoder don't depend on the RNNLM. We therefore pre-compute and process them to obtain the exact WER information for each arc using the algorithm described in . This allows us to avoid an approximation, and in practice we observe that the lattices with WER information are on average only about 20% larger than the original ones. The EMBR can then be computed with the same forward-backward algorithm as described in .
3.3 Lattice rescoring
Rescoring the lattice involves changing the scores on its arcs. The lattices are generated with a WFST created from a small n-gram, usually a pruned 3-gram or 4-gram. When rescoring a lattice with a more powerful model, some states need to be expanded to account for the longer history used by the language model. RNNLMs have a theoretically infinite history and would therefore require expanding the lattice into a tree, which is not practically tractable because the number of possible paths increases exponentially with the length of the utterance, so approximations have to be used.
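The simplest such approximation is to rescore an n-best list extracted from the lattice rather than expanding the lattice itself. A minimal sketch, where `lm_logprob` stands in for a hypothetical sentence-scoring callable:

```python
def rescore_nbest(nbest, lm_logprob, lm_scale=1.0):
    """Re-rank an n-best list: combine each hypothesis's decoder score with
    a new language model score, and return the best word sequence.
    `nbest` is a list of (decoder_score, words) pairs; `lm_logprob` and
    `lm_scale` are illustrative names, not the paper's API."""
    return max(nbest, key=lambda h: h[0] + lm_scale * lm_logprob(h[1]))[1]
```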
3.4 Initialisation and training
Before training with the EMBR loss, we initialise the RNNLM by training it conventionally on large corpora. Instead of cross-entropy we use the noise contrastive estimation (NCE) loss, because it is much faster to train and yields a self-normalised model that does not require evaluating a softmax over the whole vocabulary for each arc in the lattice [18, 19]. The reason for this pre-training is that the RNNLM is only effective when trained on large amounts of data, typically orders of magnitude more than is available from reference transcripts.
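A per-position sketch of the binary NCE objective, treating the model score as an unnormalised log-probability (all names here are illustrative; the paper's implementation uses TensorFlow):

```python
import math

def nce_loss(score_fn, target, noise_words, log_q):
    """Binary NCE objective for one prediction: contrast the target word
    against k noise samples drawn from a noise distribution with
    log-probability log_q(w). `score_fn(w)` is a hypothetical model score."""
    k = len(noise_words)

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Posterior that a word came from the model rather than the noise
    # distribution: sigmoid(s(w) - log(k * q(w))).
    def p_model(w):
        return sigmoid(score_fn(w) - math.log(k) - log_q(w))

    loss = -math.log(p_model(target))           # target classified as "data"
    for w in noise_words:
        loss -= math.log(1.0 - p_model(w))      # noise classified as "noise"
    return loss
```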
4 Experimental setup
The language model data used to pre-train the model is 2.5 billion words of general English text and the acoustic modelling data is 2000 hours of transcribed general English, about 900k utterances (21 million words), which is perturbed with point-source and reverberant noise .
We compute our test results on test sets representing different domains: news, podcast, radio, entertainment, meetings, and political; each is about 4 hours long (between 35,000 and 45,000 words). They have been chosen to be representative of the variability in accents, audio difficulty, and language that could be encountered in a real scenario.
4.2 Model description
The training and decoding of acoustic models was performed with Kaldi, using the LF-MMI objective and a time-delay neural network architecture similar to the one described in ; we used PyKaldi  to process the lattices in Python.
We trained a 4-gram on the language model data and pruned it to 60 million n-grams. We used it to expand the lattices, and in all results involving the RNNLM, we interpolated the RNNLM probabilities with the 4-gram, with a weight of 0.9 for the RNNLM and 0.1 for the 4-gram.
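This interpolation is a linear combination in probability space; a small sketch with the weights quoted above (function name is ours):

```python
import math

def interpolate_logprob(logp_rnn, logp_ngram, w_rnn=0.9):
    """Linearly interpolate two log-probabilities in probability space,
    with weight w_rnn on the RNNLM and (1 - w_rnn) on the n-gram."""
    return math.log(w_rnn * math.exp(logp_rnn)
                    + (1.0 - w_rnn) * math.exp(logp_ngram))
```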
The RNNLM was trained with NCE using TensorFlow and is a gated recurrent unit (GRU) network, with a single hidden layer of size 512 and a vocabulary of 125,000 words. We trained this model with stochastic gradient descent
(SGD) for 15 epochs, decaying the learning rate by a factor of 4 whenever the PPL on the validation set did not improve by more than 1 after an epoch. We then selected the model with the best PPL on the validation set.
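The decay scheme can be sketched as (names are ours):

```python
def decayed_lr(lr, prev_ppl, ppl, factor=4.0, min_gain=1.0):
    """Learning-rate schedule described above: divide the rate by `factor`
    whenever validation perplexity improves by no more than `min_gain`
    between epochs; otherwise keep it unchanged."""
    return lr / factor if prev_ppl - ppl <= min_gain else lr
```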
The EMBR fine-tuning was also performed in TensorFlow, using SGD with a fixed learning rate of 0.01. We batched lattices by number of states to make the processing more efficient, with a batch size of 32. To interpolate with the NCE loss, we used the reference transcripts of the lattices used for the EMBR loss.
To verify that the improvements are not simply due to adapting the RNNLM to the language domain of the acoustic data, we also perform two additional fine-tuning experiments: in the first we adapt the RNNLM on the reference transcripts of the acoustic data, and in the second we adapt it on the oracle transcripts obtained from decoding the acoustic data.
Table 1: WER (%) / relative improvement over the 4-gram-only baseline (%).

| Model | Radio | Podcast | News 1 | News 2 | Media 1 | Media 2 | Meetings | Average |
|---|---|---|---|---|---|---|---|---|
| RNNLM | 9.9 / 12.4 | 9.0 / 14.0 | 12.6 / 7.8 | 15.8 / 7.9 | 22.8 / 10.6 | 34.8 / 5.4 | 45.4 / 4.0 | 21.5 / 7.3 |
| RNNLM + EMBR | 9.4 / 16.9 | 8.6 / 17.4 | 12.5 / 8.6 | 15.7 / 8.8 | 22.4 / 12.2 | 34.5 / 6.2 | 44.9 / 5.2 | 21.1 / 9.1 |
| Adapting on transcripts | 9.8 | 8.8 | 12.5 | 15.9 | 22.7 | 35.1 | 45.2 | 21.4 |
| Adapting on oracles | 10.0 | 8.9 | 12.5 | 15.9 | 22.7 | 35.2 | 45.2 | 21.5 |
We first evaluate the impact of interpolating with the NCE loss. Figure 1 illustrates the sensitivity of the training to this parameter, and demonstrates that a low but non-zero interpolation weight is required to prevent over-fitting. The best model was produced with .
In Table 1 we compare the WER of the baseline RNNLM with the RNNLM + EMBR and the adaptation experiments. We also report the oracle WER, because it is a lower bound on the WER after rescoring: rescoring cannot add new words to the lattice. We find that although adapting the RNNLM on the transcripts improves some test sets by a small amount, it degrades others, leaving the average WER close to the original. We also find that adapting on the oracle transcripts is worse than adapting on the ground truth, so adapting on transcripts that contain errors does not help teach the RNNLM to deal with errors in the lattice.
The gains we observe with EMBR training are consistent across all test sets, and the more effective the RNNLM is at reducing the WER, the greater the impact of EMBR. Comparing the relative WER reductions brought by the RNNLM and RNNLM + EMBR, we observe that on average the proportion of errors fixed by the RNNLM increases by almost 25% after EMBR training, going from 7.3% to 9.1%.
In Table 2 we display the breakdown of the errors of the models on the test sets. We can see that, although EMBR increases the numbers of insertions and substitutions, it lowers the number of deletions by a greater margin. This is in line with the findings of , and supports our intuition that PPL-trained LMs tend to prefer shorter sentences.
| Model | Insertions | Substitutions | Deletions |
|---|---|---|---|
| RNNLM + EMBR | 5147 | 27609 | 20504 |
One drawback of our method is that our implementation is slow, with each epoch taking several days on a Titan X GPU. As shown in Figure 2, it is unclear whether the model has converged after 3 epochs, so improvements to the implementation could yield further gains.
6 Conclusion and future work
We presented a method to fine-tune an RNNLM discriminatively on lattice data by minimising the expected word error rate, and showed that it is effective at reducing the WER on diverse test sets, unlike adapting on transcripts. This suggests that the RNNLM does not merely learn the language of the training data, but learns to assign scores that are more likely to reduce the WER. The proposed EMBR training increases the relative gain from rescoring with the RNNLM over the 4-gram alone from 7.3% to 9.1%, corresponding to a 1.9% relative WER improvement over a non-fine-tuned RNNLM.
In future work we will study the impact of using different WER approximations in the lattice, as noted in Section 3.2. We will also optimise our implementation of the rescoring and forward-backward algorithms to make training more efficient, enabling us to further investigate the effect of large-scale training with the proposed criterion.
-  Dong Yu and Li Deng, Automatic Speech Recognition: A Deep Learning Approach, Springer Publishing Company, Incorporated, 2014.
-  Mehryar Mohri, Fernando Pereira, and Michael Riley, “Speech Recognition with Weighted Finite-State Transducers,” in Springer Handbook of Speech Processing, pp. 559–584. Springer Berlin Heidelberg, Berlin, Heidelberg, 2008.
-  Dietrich Klakow and Jochen Peters, “Testing the correlation of word error rate and perplexity,” Speech Communication, vol. 38, no. 1-2, pp. 19–28, sep 2002.
-  Jiaji Huang, Yi Li, Wei Ping, and Liang Huang, “Large Margin Neural Language Model,” Tech. Rep.
-  Daniel Povey and Karel Veselý, “Sequence-discriminative training of deep neural networks,” in Interspeech, 2013, pp. 3–7.
-  Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur, “Purely Sequence-Trained Neural Networks for ASR Based on Lattice-Free MMI,” sep 2016, pp. 2751–2755.
-  Matt Shannon, “Optimizing expected word error rate via sampling for speech recognition,” jun 2017.
-  Erinç Dikici, Murat Semerci, Murat Saraçlar, and Ethem Alpaydın, “Classification and Ranking Approaches to Discriminative Language Modeling for ASR,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 2, pp. 291, 2013.
-  Yuuki Tachioka and Shinji Watanabe, “Discriminative method for recurrent neural network language models,” in ICASSP, apr 2015, pp. 5386–5390, IEEE.
-  Kushal Arora and Anand Rangarajan, “Contrastive Entropy: A new evaluation metric for unnormalized language models,” jan 2016.
-  G. Heigold, W. Macherey, R. Schluter, and H. Ney, “Minimum exact word error training,” in IEEE Workshop on Automatic Speech Recognition and Understanding, 2005, pp. 186–190, IEEE.
-  Rogier C Van Dalen and Mark J F Gales, “Annotating large lattices with the exact word error,” Tech. Rep., 2015.
-  Rohit Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick Nguyen, Zhifeng Chen, Chung-Cheng Chiu, and Anjuli Kannan, “Minimum Word Error Rate Training for Attention-based Sequence-to-Sequence Models,” dec 2017.
-  D. Povey and P.C. Woodland, “Minimum Phone Error and I-smoothing for improved discriminative training,” in IEEE International Conference on Acoustics Speech and Signal Processing. may 2002, pp. I–105–I–108, IEEE.
-  Xunying Liu, Xie Chen, Yongqiang Wang, Mark J. F. Gales, and Philip C. Woodland, “Two Efficient Lattice Rescoring Methods Using Recurrent Neural Network Language Models,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 8, pp. 1438–1449, aug 2016.
-  Shankar Kumar, Michael Nirschl, Daniel Holtmann-Rice, Hank Liao, Ananda Theertha Suresh, and Felix Yu, “Lattice Rescoring Strategies for Long Short Term Memory Language Models in Speech Recognition,” nov 2017.
-  Hainan Xu, Tongfei Chen, Dongji Gao, Yiming Wang, Ke Li, Nagendra Goel, Yishay Carmiel, Daniel Povey, and Sanjeev Khudanpur, “A Pruned Rnnlm Lattice-Rescoring Algorithm for Automatic Speech Recognition,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). apr 2018, pp. 5929–5933, IEEE.
-  Andriy Mnih and Koray Kavukcuoglu, “Learning word embeddings efficiently with noise-contrastive estimation,” 2013.
-  Will Williams, Niranjani Prasad, David Mrva, Tom Ash, and Tony Robinson, “Scaling Recurrent Neural Network Language Models,” feb 2015.
-  Tom Ko, Vijayaditya Peddinti, Daniel Povey, Michael L. Seltzer, and Sanjeev Khudanpur, “A study on data augmentation of reverberant speech for robust speech recognition,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). mar 2017, pp. 5220–5224, IEEE.
-  Doğan Can, Victor R. Martinez, Pavlos Papadopoulos, and Shrikanth S. Narayanan, “Pykaldi: A python wrapper for kaldi,” in Acoustics, Speech and Signal Processing (ICASSP), 2018 IEEE International Conference on. IEEE, 2018.