Due to the ubiquitous existence of speech in daily life, Automatic Speech Recognition (ASR) has gained significant momentum in recent years. However, current ASR systems primarily rely on scores produced by an Acoustic Model (AM) and a Language Model (LM) to rank the N-best lists, and usually the 1-best hypothesis is selected as the final recognition result. For computing the LM score, the back-off n-gram LM has been prominently used for many years due to its simplicity and reliability. However, the n-gram LM is rather simplistic and heavily limited in its ability to model language context such as long-range dependencies.
In order to alleviate the above problem of the n-gram LM, the mechanism of N-best list rescoring has been proposed and proven effective in significantly improving ASR performance. For example, the Discriminative Language Model (DLM) proposed in [3, 4, 5] utilizes features such as ASR errors to train a discriminative model for N-best list rescoring. With the rise of deep neural networks in ASR, RNNLMs [6, 7] and LSTM-based LMs are becoming popular models for N-best list rescoring. More recently, Ogawa et al.
propose an Encoder-Classifier Model (EC-Model), which trains a classifier to compare pairs of hypotheses in N-best lists for rescoring. Despite their merits, each of these methods utilizes quite limited information, and the vast arsenal of state-of-the-art models for gauging linguistic and semantic legitimacy is heavily underused. For example, common word embeddings (from Word2Vec and Speech2Vec to BERT) are hard to utilize under existing rescoring frameworks.
In contrast to the conventional approach that simply adds up an LM score and an AM score for N-best list rescoring, we propose a novel Learning-to-Rescore (L2RS) mechanism, which for the first time formalizes N-best list rescoring as a learning problem. L2RS utilizes a wide range of features with automatically optimized weights to rank the N-best lists for ASR and selects the most promising one as the final decoding result. The efficacy of L2RS relies on the design of the features. We extract features using BERT sentence embeddings, topic vectors, and perplexity scores given by probabilistic topic models such as LDA, neural network based language models such as RNNLM and the BERT LM, together with the score given by an acoustic model. By combining all these features, L2RS learns a rescoring model using the RankSVM algorithm. Since each feature reflects one perspective on the linguistic and semantic legitimacy of the
N-best hypotheses, L2RS achieves superior performance by ensembling the information from all these evaluation metrics. The main contributions of the paper are summarized as follows:
To the best of our knowledge, this is the first work that formalizes the N-best list rescoring problem as a Learning-to-Rescore problem for ASR.
We propose a novel L2RS framework dedicated to ASR, which can easily incorporate various state-of-the-art NLP models to extract features. We systematically explore the effectiveness of these features and their combinations; most of the features, such as BERT sentence embeddings, are used in N-best list rescoring for the first time and are shown to be quite promising.
We conduct extensive experiments on a public dataset, and the experimental results show that L2RS outperforms not only traditional rescoring methods but also deep neural network counterparts such as RNNLM and the EC-Model, with up to a 20.67% improvement, which is quite substantial.
In this section, we first give the definition of the L2RS problem, followed by a description of the textual and acoustic features designed for L2RS. Finally, we describe the details of the rescoring model in L2RS.
2.1 Problem Definition
The pipeline of L2RS is illustrated in Fig. 1. Formally, the ASR system aims to find the optimal textual string W* for a given acoustic input X, by the following equation:

W* = argmax_W { log P_AM(X | W) + log P_LM(W) + f(φ(X, W)) }

where P_LM(W) represents a back-off n-gram LM, P_AM(X | W) is an AM, φ(X, W) is the feature-vector representation of the pair (X, W) including textual features as well as acoustic features, and f(·) is the rescoring function learned by L2RS approaches. The third component (i.e., f(φ(X, W))) is our contribution in this paper, which provides a new framework for ASR that opens up many research opportunities. During the decoding period, the ASR system generates the N-best list, denoted as {W_1, W_2, ..., W_N}. The ordering of the N-best hypotheses is decided based on the Word Error Rate (WER) of each hypothesis against the ground-truth transcript. This composes the training dataset D = {(φ(X, W_i), y_i)}, i = 1, ..., N, used for L2RS, where y_i is the rank label of hypothesis W_i. During the L2RS prediction step, the ASR system generates the N-best list {W_1, ..., W_N}, and the final decoding result W* can be obtained as follows:

W* = argmax_{W_i, 1 ≤ i ≤ N} f(φ(X, W_i))
through a Learning-to-Rescore approach, which involves feature extraction, model training and rescoring.
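The prediction step in particular amounts to scoring each hypothesis with a learned linear function and taking the argmax. A minimal sketch, assuming a linear rescoring function f(φ) = w · φ with made-up feature values and a hypothetical learned weight vector w:

```python
import numpy as np

# Hypothetical feature vectors phi(X, W_i) for a 4-hypothesis N-best list.
# In L2RS each vector would concatenate textual features (n-gram LM score,
# BERT sentence embedding, topic vector, ...) with the acoustic score;
# here the values are illustrative only.
features = np.array([
    [0.2, 1.3, 0.5],
    [0.4, 0.9, 0.7],
    [0.1, 1.1, 0.6],
    [0.3, 1.0, 0.4],
])

# Weight vector learned by RankSVM; f(phi) = w . phi is the rescoring function.
w = np.array([0.5, 1.0, -0.2])

def rescore(features, w):
    """Return the index of the hypothesis maximizing f(phi(X, W_i))."""
    scores = features @ w
    return int(np.argmax(scores))

best = rescore(features, w)  # index of the final decoding result
```

The selected index is then mapped back to its hypothesis string, which becomes the final recognition output.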
2.2 Textual Features
The textual features used in L2RS range from the lexical level to the semantic level and belong to six categories: n-gram LM, BERT Sentence Embedding, BERT LM, Probabilistic Topic Model LM, Topic Vector, and RNNLM.
n-gram LM The n-gram LM is prominently used due to its simplicity and reliability. In L2RS, we use a trigram LM trained on the transcript corpus with the SRILM toolkit (http://www.speech.sri.com/projects/srilm/).
BERT Sentence Embedding BERT, or Bidirectional Encoder Representations from Transformers, is a powerful language representation model proposed by Google that obtains state-of-the-art results on various NLP tasks. The goal of BERT sentence embedding is to represent a variable-length N-best hypothesis, e.g., “hello, nice to meet you”, as a fixed-length vector, as shown in Fig. 2. Each element of this vector represents the semantics of the original sentence, and this vector is further used in L2RS as a representation for each N-best hypothesis.
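The collapsing step from variable-length token outputs to one fixed-length vector can be sketched with one common pooling strategy, mean pooling. The random token vectors below merely stand in for actual BERT encoder outputs (which the paper obtains from a fine-tuned BERT model), so only the shapes, not the values, are meaningful:

```python
import numpy as np

np.random.seed(0)

# Stand-in for per-token BERT outputs: in practice these come from a
# pretrained/fine-tuned BERT encoder; here random 1024-dim vectors
# illustrate the pooling step only.
tokens = ["hello", ",", "nice", "to", "meet", "you"]
token_vectors = np.random.randn(len(tokens), 1024)

# Mean pooling collapses the variable-length hypothesis into a single
# fixed-length sentence embedding, regardless of the token count.
sentence_embedding = token_vectors.mean(axis=0)
```

Whatever the hypothesis length, the resulting feature vector always has the same dimensionality (1024 here, matching the setting in Section 3.1), which is what makes it usable inside a fixed-dimension rescoring model.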
BERT LM BERT can also be used as a LM to evaluate the quality of the N-best hypotheses from a linguistic perspective. In L2RS, we use the perplexity given by a fine-tuned BERT model as a feature of the N-best hypotheses.
Probabilistic Topic Model LM Topic models such as LDA and SentenceLDA have the ability to capture the semantic coherence of the N-best hypotheses. We first train a topic model on the transcript corpus, which produces the topic-word distribution φ_{k,w} (k is the topic index and w is the word index). Next, we use the trained model to obtain the topic mixing proportion vector θ of each hypothesis h, which represents the semantic meaning of this hypothesis. Based on these two parameters, we compute a transcript-specific unigram LM by:

p(w | h) = Σ_k θ_k φ_{k,w}
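The mixture computation and the resulting perplexity feature can be sketched on a toy model (the φ and θ values below are made up for illustration; word identities are reduced to integer indices):

```python
import numpy as np

# Hypothetical LDA parameters: K=2 topics over a V=3 word vocabulary.
# phi[k, w] is the topic-word distribution; theta[k] is the hypothesis's
# inferred topic mixing proportion vector.
phi = np.array([
    [0.7, 0.2, 0.1],   # topic 0
    [0.1, 0.3, 0.6],   # topic 1
])
theta = np.array([0.5, 0.5])

# Transcript-specific unigram LM: p(w | h) = sum_k theta_k * phi_{k,w}
p_unigram = theta @ phi

# Each hypothesis word is scored under this mixture; the perplexity over
# the hypothesis serves as the Probabilistic Topic Model LM feature.
hypothesis = [0, 2, 2]          # word indices of the hypothesis
log_prob = np.log(p_unigram[hypothesis]).sum()
perplexity = np.exp(-log_prob / len(hypothesis))
```

Note that p_unigram is itself a valid distribution over the vocabulary (it sums to one), since it is a convex combination of the per-topic word distributions.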
Topic Vector Similar to the Topic Model LM, L2RS directly uses the trained topic model to infer each N-best candidate’s topic mixing proportion vector θ, and this vector is used as a topic representation for each N-best hypothesis.
Neural Network-based LM Neural network-based LMs have proven effective for N-best list rescoring in ASR systems. We train an RNNLM on the transcript corpus, and the perplexity of each hypothesis given by the RNNLM acts as a feature reflecting the quality of the hypothesis.
2.3 Acoustic Feature
The acoustic feature used in L2RS is the acoustic score given by the acoustic model. Specifically, in L2RS, we train a “chain” model on the training data using the Kaldi toolkit (https://github.com/kaldi-asr/kaldi). It should be noted that other features such as speech embeddings produced by Speech2Vec can also be used.
2.4 Rescoring Model
Learning to Rank is a central problem in information retrieval. There are three categories of Learning to Rank methods: pointwise approaches such as McRank, pairwise approaches such as RankSVM, and listwise approaches such as SVM MAP. In L2RS, we choose RankSVM to train the rescoring model, and the learning of RankSVM is formalized as the following quadratic programming problem:

min_{w, ξ}  (1/2) ||w||^2 + C Σ_{i,j} ξ_{i,j}
s.t.  w · φ(x_i) ≥ w · φ(x_j) + 1 − ξ_{i,j},  ξ_{i,j} ≥ 0, for all pairs where x_i is ranked above x_j,

where x_i and x_j are two instances from the same N-best list, ||·|| denotes the L2 norm, ℓ denotes the number of training instances, and C is a trade-off coefficient.
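To make the pairwise objective concrete, here is a toy subgradient-descent sketch of the same hinge-loss formulation. This is an illustration only, not the solver used in the paper (the experiments use the SVMrank implementation), and all feature values and labels below are made up:

```python
import numpy as np

# Toy N-best list: feature vectors and graded relevance labels
# (a higher label means a better hypothesis, i.e. a lower WER).
X = np.array([[1.0, 0.2], [0.6, 0.8], [0.3, 0.1]])
y = np.array([2, 1, 0])

def ranksvm_sgd(X, y, C=10.0, lr=0.01, epochs=200):
    """Minimize 0.5*||w||^2 + C * sum of hinge(1 - w.(x_i - x_j)) over all
    pairs with y_i > y_j, by subgradient descent on the RankSVM objective."""
    w = np.zeros(X.shape[1])
    pairs = [(i, j) for i in range(len(y)) for j in range(len(y)) if y[i] > y[j]]
    for _ in range(epochs):
        grad = w.copy()                      # gradient of the L2 regularizer
        for i, j in pairs:
            d = X[i] - X[j]
            if 1.0 - w @ d > 0:              # margin violated: hinge is active
                grad -= C * d
        w -= lr * grad
    return w

w = ranksvm_sgd(X, y)
scores = X @ w   # after training, scores should respect the label ordering
```

The learned w then plays the role of the rescoring function f(φ) = w · φ applied at prediction time.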
In this section we conduct experiments on a public dataset to verify the effectiveness of the proposed model.
3.1 Experiment Setup
We use the public TED-LIUM dataset (https://www.openslr.org/51/) in our experiments, with statistics listed in Table 1. For RankSVM (https://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html), the parameter C is set to 10. For BERT, we first set up a pretrained BERT model (https://github.com/google-research/bert) and then fine-tune it on the transcript corpus of the training dataset. The dimension of the BERT sentence embedding is set to 1024, using a method similar to the bert-as-service toolkit. For topic modeling, we use LightLDA, and the number of topics is set to 50. Following prior work, we obtain the 50-best list for each utterance in the dataset. Our algorithm is compared with the following baseline methods for N-best list rescoring: n-gram LM, RNNLM, BERT LM, Trigger-based DLM, Cache Model, EC-Model, and Neural Speech-to-Text LM (NS2TLM). All experiments were conducted on a server with 314 GB memory, 72 Intel Xeon cores, a Tesla K80 GPU, and CentOS.
|                    | Train     | Dev        | Test       |
| No. of transcripts | 774       | 8          | 11         |
| No. of words       | 1.5M      | 17.8k      | 27.5k      |
| No. of segments    | 56.8k     | 0.6k       | 1.5k       |
| Length of waves    | 118 hours | 1.72 hours | 3.07 hours |
3.2 Experimental Results
3.2.1 Normalized Discounted Cumulative Gain (NDCG)
Table 2 lists the rescoring performance of L2RS in terms of NDCG@k. NDCG@k is a measure widely used to reflect the top-ranking quality of a ranked list, and the higher, the better. In most cases, the ASR system finally delivers the 1-best result from the rescored N-best list. However, some tasks such as ASR in noisy environments or casual-style speech require multiple recognition hypotheses [9, 29]. From the results, we can see that compared with other methods, L2RS produces a better ranking list, which means not only that the top-1 result is improved but also that the whole ranking list is correctly ordered. Specifically, BERT sentence embedding is quite effective for L2RS, with a 14.58% relative improvement over the baseline AM + n-gram LM rescoring method. By incorporating all these features, L2RS(opt) achieves up to a 20.67% relative improvement over the AM + n-gram baseline.
| AM + n-gram LM | 0.5931 | 0.5859 |
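For reference, NDCG@k on graded relevance labels can be computed as below. The sample list is hypothetical (a 4-best list whose top hypothesis is correct but whose second and third entries are swapped), not a result from the paper:

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for a ranked list of graded relevance labels (higher = better):
    DCG of the list divided by the DCG of the ideally ordered list."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(round(ndcg_at_k([3, 1, 2, 0], k=4), 4))  # → 0.9725
```

A perfectly ordered list scores exactly 1.0, which is why NDCG rewards a correctly ordered N-best list as a whole rather than only the top-1 entry.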
3.2.2 Word Error Rate (WER)
Since our ultimate goal is to improve ASR, we finally examine the effectiveness of the L2RS method in terms of WER, with results listed in Table 3. The “Oracle” WER is computed by taking the best result each time from the N-best list by comparison with the ground-truth transcript; it is the theoretical ceiling performance of all rescoring methods. Among these methods, RNNLM, BERT-LM, Trigger-based DLM, Cache Model, EC-Model, NS2TLM and L2RS(opt) achieve 1.728%, -0.083%, -1.036%, 0.026%, 0.204%, 0.506% and 2.448% improvement, respectively, over the baseline n-gram LM method on the test dataset. L2RS outperforms the state-of-the-art rescoring methods by a significant margin. The experimental results validate that by incorporating more valuable features from state-of-the-art NLP models, L2RS can benefit current ASR systems.
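The WER metric itself is the word-level Levenshtein distance normalized by the reference length. A minimal sketch (the example sentences are illustrative, not drawn from TED-LIUM):

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(wer("nice to meet you", "nice to beat you"))  # → 0.25
```

The Oracle WER in Table 3 corresponds to evaluating this function on the WER-minimizing hypothesis of each N-best list.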
3.2.3 Quantitative Analysis of Features
We use each dimension of these features to train an L2RS model and take NDCG@10 as a measure of the quality of the feature. The results are listed in Fig. 3, with the x-axis representing the feature category and the y-axis representing the NDCG values. We can see that besides the traditional AM and LM scores, other features also provide valuable information from different linguistic and semantic perspectives. Features such as BERT sentence embedding, which are hard to use in the traditional rescoring pipeline, are even more effective than the RNNLM score. L2RS provides a flexible mechanism to explore the effects of these embedding features and their combinations for N-best list rescoring.
In this paper, we propose a novel Learning-to-Rescore mechanism for ASR. L2RS formalizes N-best list rescoring as a learning problem and incorporates comprehensive features with automatically optimized weights to form a rescoring model. Experimental results indicate that L2RS is quite effective for N-best list rescoring and opens a new door for ASR. For future work, we will design neural L2RS models dedicated to ASR systems.
-  Jerome R Bellegarda, “Statistical language model adaptation: review and perspectives,” Speech communication, vol. 42, no. 1, pp. 93–108, 2004.
-  Dan Jurafsky, Speech & language processing, Pearson Education India, 2000.
-  Brian Roark, Murat Saraclar, and Michael Collins, “Discriminative n-gram language modeling,” Comput. Speech Lang., vol. 21, no. 2, pp. 373–392, Apr. 2007.
-  T. Oba, T. Hori, A. Nakamura, and A. Ito, “Round-robin duel discriminative language models,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 4, pp. 1244–1255, May 2012.
-  Brian Roark, Murat Saraclar, Michael Collins, and Mark Johnson, “Discriminative language modeling with conditional random fields and the perceptron algorithm,” in Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2004, p. 47.
-  Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur, “Recurrent neural network based language model,” in Eleventh Annual Conference of the International Speech Communication Association, 2010.
-  Tomáš Mikolov, Stefan Kombrink, Lukáš Burget, Jan Černockỳ, and Sanjeev Khudanpur, “Extensions of recurrent neural network language model,” in 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011, pp. 5528–5531.
-  Hakan Erdogan, Tomoki Hayashi, John R Hershey, Takaaki Hori, Chiori Hori, Wei-Ning Hsu, Suyoun Kim, Jonathan Le Roux, Zhong Meng, and Shinji Watanabe, “Multi-channel speech recognition: Lstms all the way through,” in CHiME-4 workshop, 2016, pp. 1–4.
-  Atsunori Ogawa, Marc Delcroix, Shigeki Karita, and Tomohiro Nakatani, “Rescoring n-best speech recognition list based on one-on-one hypothesis comparison using encoder-classifier model,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 6099–6103.
-  Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in Neural Information Processing Systems, 2013, pp. 3111–3119.
-  Yu-An Chung and James Glass, “Speech2vec: A sequence-to-sequence framework for learning word embeddings from speech,” arXiv preprint arXiv:1803.08976, 2018.
-  Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, pp. 4171–4186.
-  David M Blei, Andrew Y Ng, and Michael I Jordan, “Latent dirichlet allocation,” Journal of Machine Learning Research, vol. 3, no. Jan, pp. 993–1022, 2003.
-  Yunbo Cao, Jun Xu, Tie-Yan Liu, Hang Li, Yalou Huang, and Hsiao-Wuen Hon, “Adapting ranking svm to document retrieval,” in Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2006, pp. 186–193.
-  Alex Wang and Kyunghyun Cho, “Bert has a mouth, and it must speak: Bert as a markov random field language model,” arXiv preprint arXiv:1902.04094, 2019.
-  Georgios Balikas, Massih-Reza Amini, and Marianne Clausel, “On a topic model for sentences,” in Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. ACM, 2016, pp. 921–924.
-  Tie-Yan Liu et al., “Learning to rank for information retrieval,” Foundations and Trends® in Information Retrieval, vol. 3, no. 3, pp. 225–331, 2009.
-  Ping Li, Qiang Wu, and Christopher J Burges, “Mcrank: Learning to rank using multiple classification and gradient boosting,” in Advances in Neural Information Processing Systems, 2008, pp. 897–904.
-  Ralf Herbrich, “Large margin rank boundaries for ordinal regression,” Advances in large margin classifiers, pp. 115–132, 2000.
-  Yisong Yue, Thomas Finley, Filip Radlinski, and Thorsten Joachims, “A support vector method for optimizing average precision,” in Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2007, pp. 271–278.
-  François Hernandez, Vincent Nguyen, Sahar Ghannay, Natalia Tomashenko, and Yannick Estève, “Ted-lium 3: twice as much data and corpus repartition for experiments on speaker adaptation,” in International Conference on Speech and Computer. Springer, 2018, pp. 198–208.
-  Han Xiao, “bert-as-service,” https://github.com/hanxiao/bert-as-service, 2018.
-  Jinhui Yuan, Fei Gao, Qirong Ho, Wei Dai, Jinliang Wei, Xun Zheng, Eric Po Xing, Tie-Yan Liu, and Wei-Ying Ma, “Lightlda: Big topic models on modest computer clusters,” in Proceedings of the 24th International Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2015, pp. 1351–1361.
-  Xunying Liu, Yongqiang Wang, Xie Chen, Mark J. F. Gales, and Phil Woodland, “Efficient lattice rescoring using recurrent neural network language models,” in IEEE International Conference on Acoustics, 2014.
-  Ke Li, Hainan Xu, Yiming Wang, Daniel Povey, and Sanjeev Khudanpur, “Recurrent neural network language model adaptation for conversational speech recognition,” INTERSPEECH, Hyderabad, pp. 1–5, 2018.
-  Natasha Singh-Miller and Michael Collins, “Trigger-based language modeling using a loss-sensitive perceptron algorithm,” in 2007 IEEE International Conference on Acoustics, Speech and Signal Processing-ICASSP’07. IEEE, 2007, vol. 4, pp. IV–25.
-  Tomohiro Tanaka, Ryo Masumura, Takafumi Moriya, and Yushi Aono, “Neural speech-to-text language models for rescoring hypotheses of dnn-hmm hybrid automatic speech recognition systems,” in 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2018, pp. 196–200.
-  Christopher Manning, Prabhakar Raghavan, and Hinrich Schütze, “Introduction to information retrieval,” Natural Language Engineering, vol. 16, no. 1, pp. 100–103, 2010.
-  Lidia Mangu, Eric Brill, and Andreas Stolcke, “Finding consensus in speech recognition: word error minimization and other applications of confusion networks,” Computer Speech & Language, vol. 14, no. 4, pp. 373–400, 2000.
-  Xiubo Geng, Tie-Yan Liu, Tao Qin, and Hang Li, “Feature selection for ranking,” in Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2007, pp. 407–414.