A Seq2Seq model consists of two components: an encoder, which encodes the acoustic feature sequence into a high level representation, and a decoder, which generates the corresponding word sequence. The encoder leverages an attention mechanism to fuse the extracted features into a fixed-dimensional vector that captures the global semantic information of a speech signal. The decoder is a conditional language model (LM) that captures the linguistic information of the transcriptions. During the decoding stage, at each step the decoder predicts the current word based on the acoustic encoding from the encoder, the history context, and the previous word. This architecture is also referred to as Listen, Attend and Spell.
Compared with speech transcriptions, abundant unsupervised text corpora, which carry rich linguistic information, are easier to obtain. Large-scale external text data is commonly used to train language models (LMs) to improve ASR performance in conventional hidden Markov model (HMM) based or connectionist temporal classification (CTC) based ASR pipelines. However, because the encoder and the decoder are optimized jointly, it is non-trivial to integrate an external LM into a Seq2Seq model.
Shallow fusion and deep fusion are two approaches to integrating an LM into a Seq2Seq model. Shallow fusion performs log-linear interpolation between the decoder of a Seq2Seq model and an external LM during beam search. The external LM can be an n-gram LM or a neural network language model (NNLM). It has achieved success in ASR tasks [1, 6].
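As a concrete illustration, shallow fusion combines the two models' scores for each beam-search hypothesis by log-linear interpolation. The sketch below is a minimal NumPy illustration; the toy vocabulary, the probabilities, and the LM weight of 0.3 are made-up placeholder values, not settings from this paper.

```python
import numpy as np

def shallow_fusion_scores(log_p_s2s, log_p_lm, lm_weight=0.3):
    """Log-linear interpolation of Seq2Seq decoder and external LM scores.

    log_p_s2s, log_p_lm: log-probability vectors over the vocabulary for the
    next token; lm_weight is a tunable interpolation weight (0.3 is only a
    placeholder here).
    """
    return log_p_s2s + lm_weight * log_p_lm

# Toy example over a 4-token vocabulary.
log_p_s2s = np.log(np.array([0.5, 0.2, 0.2, 0.1]))  # Seq2Seq decoder output
log_p_lm = np.log(np.array([0.1, 0.6, 0.2, 0.1]))   # external LM output
fused = shallow_fusion_scores(log_p_s2s, log_p_lm)
best = int(np.argmax(fused))  # token chosen after fusion
```

In a real decoder, this fused score would be accumulated along each beam hypothesis rather than used greedily.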
Various deep fusion approaches leverage a neural network to fuse the hidden representations of the Seq2Seq decoder and an external neural network based LM. Cold fusion and component fusion utilize a pre-trained recurrent neural network language model (RNNLM) and a gating mechanism to improve ASR performance [7, 8]. These fusion approaches have shown promising performance. However, the neural network of the external LM increases the complexity of the Seq2Seq model; specifically, the fusion network introduces additional parameters into the Seq2Seq model for deep fusion. Moreover, both shallow fusion and deep fusion need the external LM during the test stage, which introduces additional complexity into the ASR system.
We propose a knowledge distillation (KD) based training approach to integrating an external LM into a Seq2Seq model. First, an RNNLM is trained on large scale text data. Then, the RNNLM is used to generate soft labels of the speech transcriptions to train the Seq2Seq model. This training approach is also known as the teacher/student model: the teacher (RNNLM) provides soft labels as prior knowledge to “teach” the student (Seq2Seq decoder). Thus, we refer to the proposed training approach as “Learn Spelling from Teachers” (LST). LST is simple to implement: it does not modify the model structure, and only needs an RNNLM to be trained to generate soft labels. With LST, the external LM is only needed during training, so it does not increase the complexity of the model for testing. Furthermore, LST and shallow fusion can be used together to achieve better performance. We conduct experiments on the publicly available AISHELL-1 dataset (http://openslr.org/33/) and the CLMAD text dataset (http://openslr.org/55/) to show the effectiveness of the proposed LST. We use Speech-Transformer as the backbone network. Our proposed approach reduced the character error rate (CER) from to . We further utilized shallow fusion for the model trained with LST, and achieved a CER of .
The rest of this paper is organized as follows. Section 2 introduces the background. Section 3 introduces the proposed LST. Section 4 introduces the related work. Section 5 describes the experimental results. Section 6 summarizes the paper.
2 Background: Seq2Seq models for ASR
A basic Seq2Seq model is shown in Fig. 1(a). First, a speech signal is processed into an acoustic feature sequence. Then, an encoder network encodes the sequence into a high level acoustic representation. The encoder can be a recurrent neural network [2, 12] or a transformer. The decoder is a conditional LM: given the high level acoustic representation, the previous token, and the history context, it predicts the current token. The probability distribution over the vocabulary is computed by a softmax function.
Attention is an important mechanism for capturing the relationship between the acoustic representations and the current state of the decoder. The attention scores are computed from the current state of the decoder and the high level acoustic representations, and then the acoustic information and the decoder state are fused.
The encoder and decoder are trained jointly. The training criterion is cross entropy:

\mathcal{L}_{CE} = -\sum_{t}\sum_{i=1}^{K}\delta(i, y_t)\log P(i \mid y_{t-1}, h_t, x; \theta),

where $i$ is the index of each token, $K$ is the vocabulary size, $y_t$ is the index of the corresponding ground truth token at step $t$, $y_{t-1}$ is the previous token, $h_t$ is the history context, $x$ is the acoustic features, $P$ represents probability, and $\theta$ stands for the parameters of the whole network. $\delta(i, y_t)$ is $1$ if the two variables are equal, and $0$ otherwise.
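With a 1-of-K hard label, the delta zeroes out every vocabulary term except the ground truth, so the per-step loss reduces to the negative log-probability of the correct token. A minimal sketch (the toy distribution below is illustrative, not from the paper):

```python
import numpy as np

def step_cross_entropy(probs, target_index):
    """Cross entropy at one decoding step with a 1-of-K hard label:
    the Kronecker delta removes every term except the ground truth,
    so the loss is just -log P(ground-truth token)."""
    return -np.log(probs[target_index])

probs = np.array([0.7, 0.2, 0.1])    # toy decoder output distribution
loss = step_cross_entropy(probs, 0)  # ground-truth token has index 0
```

The sequence-level loss is simply this quantity summed over all decoding steps.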
3 Distilling knowledge from external LMs
The basic idea of “Learn Spelling from Teachers” (LST) is: first, train an RNNLM on an external large scale text corpus, and then use this RNNLM to guide Seq2Seq model training. Besides the 1-of-K hard labels provided by the transcriptions, the RNNLM provides soft labels, which carry the knowledge of the text corpus. The soft labels are probabilities estimated by the RNNLM. Fig. 2 shows the hard labels and soft labels of tokens in the vocabulary at one time step in a sequence. The soft labels contain more information than hard labels, e.g., some tokens have relatively large probabilities, and some tokens have very small probabilities.
Given the context and the previous token, the probability of the $i$-th token in the vocabulary estimated by the RNNLM is

P_{LM}(i \mid y_{t-1}, h_t) = \frac{\exp(z_i / T)}{\sum_{j=1}^{K} \exp(z_j / T)},

where $z_i$ is the $i$-th node of the latent variable before the softmax function, $K$ is the vocabulary size, $y_{t-1}$ is the previous token, $h_t$ is the history context, and $T$ is a parameter called temperature, which smooths the outputs.
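The temperature-smoothed softmax can be sketched as follows; the logits and temperature values here are illustrative only, not the paper's settings:

```python
import numpy as np

def softmax_with_temperature(logits, T=2.0):
    """Temperature-smoothed softmax: q_i = exp(z_i / T) / sum_j exp(z_j / T).
    Higher T flattens the distribution; T = 1 recovers the plain softmax.
    T = 2.0 is only an illustrative default."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([4.0, 2.0, 1.0, 0.0])
sharp = softmax_with_temperature(logits, T=1.0)   # peaky distribution
smooth = softmax_with_temperature(logits, T=5.0)  # flattened soft labels
```

Raising T spreads probability mass onto the non-maximal tokens, which is exactly what makes the soft labels informative for distillation.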
To make the Seq2Seq model learn the knowledge from the RNNLM, we minimize the Kullback-Leibler divergence (KLD) between the probability estimated by the RNNLM and the probability estimated by the Seq2Seq model. Let $q_i = P_{LM}(i \mid y_{t-1}, h_t)$ and $p_i = P(i \mid y_{t-1}, h_t, x; \theta)$; the KLD is

D_{KL}(q \,\|\, p) = \sum_{i=1}^{K} q_i \log \frac{q_i}{p_i}.

Because $q_i$ is fixed during training the Seq2Seq model, minimizing the KLD is equivalent to minimizing the cross entropy form:

\mathcal{L}_{LST} = -\sum_{t}\sum_{i=1}^{K} q_i \log p_i.
We refer to the above loss as LST loss.
Combining the cross entropy loss and the LST loss, we can simplify the overall objective into a label interpolation form:

\mathcal{L} = (1-\lambda)\,\mathcal{L}_{CE} + \lambda\,\mathcal{L}_{LST} = -\sum_{t}\sum_{i=1}^{K}\big[(1-\lambda)\,\delta(i, y_t) + \lambda\, q_i\big]\log p_i,

where $\lambda$ is the interpolation weight. Thus, compared with the vanilla Seq2Seq model, we only modify the labels rather than the loss function during the training stage. The interpolated label combines the knowledge from the transcriptions and the knowledge from the LM. The LST is illustrated in Fig. 1(b).
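In code, the label interpolation amounts to mixing the one-hot label with the RNNLM's soft label and taking the cross entropy against the mixture. A sketch under made-up values (the weight 0.5 and the toy distributions are placeholders, not the tuned hyper-parameters):

```python
import numpy as np

def lst_targets(hard_index, q_lm, lam=0.5):
    """Interpolated LST label: (1 - lam) * one-hot + lam * RNNLM soft label.
    lam is the interpolation weight (0.5 is only a placeholder)."""
    target = lam * np.asarray(q_lm, dtype=float)
    target[hard_index] += 1.0 - lam
    return target

def lst_loss(log_probs, hard_index, q_lm, lam=0.5):
    """Cross entropy of the Seq2Seq output against the interpolated label."""
    return -np.dot(lst_targets(hard_index, q_lm, lam), log_probs)

q_lm = np.array([0.6, 0.3, 0.1])           # RNNLM soft label
log_p = np.log(np.array([0.5, 0.4, 0.1]))  # Seq2Seq log-probabilities
loss = lst_loss(log_p, hard_index=0, q_lm=q_lm)
```

Because the interpolated target is still a valid probability distribution, the training loop is unchanged apart from the labels, as described above.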
Comparing Fig. 1(b) with Fig. 1(a), we can see that LST is only used during training, and the external RNNLM is removed during testing, so the computation at test time is the same as for the original Seq2Seq model. To achieve better performance, shallow fusion can additionally be used with LST during decoding. Furthermore, beyond ASR, our proposed LST can be applied to Seq2Seq models in general.
4 Related work
Knowledge distillation. KD was proposed for model compression. It is also referred to as teacher-student learning. Kim and Rush proposed to use KD to reduce the size of a Seq2Seq model for machine translation. It has also been used for domain adaptation of acoustic models and language models. Different from these works, our work focuses on integrating external language models into Seq2Seq ASR systems.
Label smoothing. Label smoothing has been used to prevent the Seq2Seq ASR model from making overconfident predictions [4, 3, 16]. It can be seen as a special case of KLD regularization in which the prior label distribution is assumed to be uniform. Unlike label smoothing, LST leverages an RNNLM to provide a context-dependent prior distribution rather than a simple uniform distribution. Instead of being assumed, the prior distribution is estimated with a data-driven method. Besides alleviating the overconfidence problem, LST introduces knowledge from an external large scale text corpus.
5 Experiments
5.1 Datasets
We use the Chinese corpus AISHELL-1 to evaluate our proposed approach. The training set contains hours of speech ( utterances) recorded by speakers. The development set contains hours of speech ( utterances) recorded by speakers. The test set contains hours of speech ( utterances) recorded by speakers. The speakers of the training, development, and test sets do not overlap. All the recordings of the corpus are in kHz WAV format. The content of the speech is news on different topics.
A subset of the CLMAD text dataset [11, 18] is used as the external text dataset (this subset has been shared via OneDrive: https://1drv.ms/u/s!An08U7hvUohBb234-V-Z0Qb_Zcc). We use the open source tool XenC to extract the subset of CLMAD that is topic-matched with AISHELL-1. The preprocessing steps are as follows:
Select million sentences which have a small cross entropy with the AISHELL-1 training transcriptions;
Remove the sentences whose lengths are longer than ;
Mix the remaining sentences with the training transcriptions (which are duplicated times to increase their proportion);
Re-segment the word sequences into characters.
The information of the text data is shown in Table 1.
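The preprocessing steps above can be sketched as follows. The selection criterion mimics cross-entropy difference scoring in the style of Moore and Lewis [20], as implemented in tools like XenC; the callable scorers and the toy sentences are assumptions for illustration, not the actual XenC interface:

```python
def select_matched_sentences(sentences, in_domain_xent, general_xent, n_select):
    """Keep the n_select sentences whose cross entropy under an in-domain LM
    is smallest relative to a general-domain LM (lower score = better match).
    in_domain_xent / general_xent are callables returning a cross entropy."""
    scored = sorted(sentences, key=lambda s: in_domain_xent(s) - general_xent(s))
    return scored[:n_select]

def resegment_to_characters(sentence):
    """Re-segment a whitespace-separated word sequence into characters."""
    return list(sentence.replace(" ", ""))

# Toy usage: score with a fake in-domain cross entropy (sentence length here)
# against a constant general-domain score.
sents = ["news about economy", "random chat", "sports news"]
picked = select_matched_sentences(sents, lambda s: len(s), lambda s: 10.0, 2)
```

In practice the two scorers would be language models trained on the AISHELL-1 transcriptions and on the full CLMAD corpus, respectively.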
5.2 Experimental setup
In this paper, we employ Speech-Transformer [3, 21], a non-recurrent Seq2Seq model for speech recognition, as the backbone network. Instead of using the hidden states and recurrent structures of RNNs, the transformer models context by computing attention directly. Please see [3, 12, 21] for details of the transformer.
The acoustic features are -dimensional Mel filter bank features (FBANK), which are extracted every 10 ms with a frame length of 25 ms. Each frame is spliced with the three left frames, so the input of the network is -dimensional. The sequence is subsampled every three frames. The Speech-Transformer consists of blocks in the encoder and blocks in the decoder. The dimensionality of the model is , and the number of inner nodes of the fully connected feed-forward network is . The number of heads is . The modeling units of the decoder are characters, including three special symbols “unk”, “sos”, and “eos”, which represent an unknown character, the start of a sentence, and the end of a sentence, respectively. The character embedding is shared with the output weights of the decoder. Following , we use the Adam optimizer with , , . The learning rate is updated as follows:
lr = k \cdot d_{model}^{-0.5} \cdot \min(n^{-0.5},\; n \cdot warmup\_n^{-1.5}),

where $d_{model}$ is the dimensionality of the model, $n$ is the step number, $k$ is a tunable parameter, and the learning rate increases linearly for the first $warmup\_n$ steps. We set , . The model is trained for epochs. There are utterances containing about K frames in one batch. The development set is used for validation. Only the model which achieves the lowest cross entropy on the development set is stored as the final model.
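This warmup-then-decay behavior is the standard Transformer learning-rate schedule with a tunable scale. A sketch with placeholder values (d_model=512, k=1.0, and warmup_steps=4000 are illustrative defaults, not the paper's exact settings):

```python
def transformer_lr(step, d_model=512, k=1.0, warmup_steps=4000):
    """Transformer-style schedule: the learning rate grows linearly for the
    first warmup_steps steps, then decays proportionally to step^{-0.5}.
        lr = k * d_model^{-0.5} * min(step^{-0.5}, step * warmup_steps^{-1.5})
    """
    step = max(step, 1)  # guard against step 0
    return k * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

peak = transformer_lr(4000)  # maximum learning rate, reached at end of warmup
```

The two branches of the min() cross exactly at step = warmup_steps, which is where the schedule peaks.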
The external RNNLM is a two-layer long short-term memory (LSTM) network. The modeling units are the same as those of the Seq2Seq model. The embedding size of the RNNLM is , and the number of LSTM cells in each layer is . The RNNLM is trained on the external text. Stochastic gradient descent (SGD) with momentum is used as the optimizer for training the RNNLM. The momentum is set to , and the learning rate is set to . The RNNLM is trained for epochs.
For decoding, we set the beam width to for beam search, and the maximum decoding length to .
5.3 Results and analysis
5.3.1 The effectiveness of external text
First, we demonstrate the effectiveness of the external text data and the RNNLM. We compute the perplexities on the AISHELL-1 test transcriptions, which are shown in Table 2. Note that the data is at the character level, so the perplexities are relatively small. We can see that, compared with the n-gram with Kneser-Ney smoothing trained on the training transcriptions, the n-gram trained on the external text achieves a significant reduction in perplexity. Moreover, the RNNLM achieves about a relative reduction over the n-gram trained on the external text.
5.3.2 The impact of hyper-parameters
Table 3(a) shows the character error rates (CERs) on the development set with different values of the temperature when the interpolation weight is fixed at . The temperature controls the smoothness of the soft labels generated by the RNNLM. When it is too small, the soft labels are too sharp, and the Seq2Seq training is perturbed heavily. When it is too large, the soft labels are too smooth to affect the training. We can see that when the temperature is set to , the model achieves the best performance.
Then we fix the temperature at and evaluate the influence of the interpolation weight, which controls the proportion of the ground truth hard labels to the soft labels of the RNNLM. The results are shown in Table 3(b). We can see that when it is set to , the model achieves the best performance on the development set. According to the above results in Table 3, we select and as the final hyper-parameters. We refer to the model trained with these hyper-parameters as “Seq2Seq+LST” in the rest of the paper.
|Model|Perplexity|
|n-gram (Training Trans.)| |
|n-gram (Ext. Text)| |
|RNNLM (Ext. Text)| |
5.3.3 The effectiveness of the proposed approach
Table 4 gives the results of each model on the test set. “Seq2Seq” is the plain Seq2Seq model without regularization. Compared to “Seq2Seq”, “Seq2Seq+LST” achieves a relative reduction in character error rate.
We also report results of two KLD based regularization approaches, namely label smoothing and unigram smoothing. For label smoothing, the prior label distribution is assumed to be uniform; it achieves a relative reduction in character error rate. For unigram smoothing, the prior label distribution is assumed to be the frequency of each label, estimated on the external text. Because the unigram distribution is too sharp, it introduces noise and disturbs training. We therefore add a small constant to the frequencies and re-normalize them to smooth the unigram. We can see that the original unigram hurts the performance, while the smoothed unigram improves it. Both label smoothing and unigram smoothing are effective for regularizing the model, but the unigram should be smoothed to alleviate the sharpness problem.
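The unigram smoothing described above can be sketched as follows; the epsilon value and the toy counts are placeholders, not the paper's settings:

```python
import numpy as np

def smoothed_unigram(counts, eps=1e-3):
    """Smooth a unigram prior by adding a small constant to the normalized
    frequencies and re-normalizing, so rare tokens do not receive near-zero
    prior probability. eps = 1e-3 is only an illustrative value."""
    freq = np.asarray(counts, dtype=float)
    freq = freq / freq.sum()   # raw unigram frequencies
    freq = freq + eps          # add a small constant to every token
    return freq / freq.sum()   # re-normalize to a valid distribution

counts = np.array([9000, 900, 90, 10, 0])  # toy token counts
prior = smoothed_unigram(counts)
```

Without the added constant, the zero-count token would get a prior of exactly zero, which is what makes the raw unigram too sharp to use as a label prior.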
From Table 4, we can see that “Seq2Seq+LST” outperforms both label smoothing and unigram smoothing (without shallow fusion). We conjecture that the assumptions of label smoothing (a uniform distribution) and unigram smoothing (unigram frequencies) do not match the real label distribution, whereas LST, as a data-driven approach, does not assume a prior distribution.
We further apply shallow fusion (SF) with the RNNLM to each model. The weight of the LM is . The RNNLM is the same one used for LST. We can see that shallow fusion improves performance for all models. The “Seq2Seq + LST” model outperforms the “Seq2Seq + Label Smoothing + SF” model (CER ), which demonstrates that LST is an effective way to improve the performance of Seq2Seq models. Moreover, the model which uses LST and shallow fusion together, i.e., “Seq2Seq + LST + SF”, achieves the best CER of .
|Model|CER (%)|
|Seq2Seq| |
|Seq2Seq + SF| |
|Seq2Seq + Label Smoothing|
|Seq2Seq + Label Smoothing + SF|
|Seq2Seq + Original Unigram Smoothing|
|Seq2Seq + Smoothed Unigram Smoothing|
|Seq2Seq + Smoothed Unigram Smoothing + SF|
|Seq2Seq + Proposed LST|
|Seq2Seq + Proposed LST + SF|
To further show the effect of our proposed approach, we plot the loss curves of the baseline “Seq2Seq” and the proposed “Seq2Seq+LST” in Fig. 3. For the “Seq2Seq” model, the training loss is lower than the validation loss. However, for “Seq2Seq+LST”, the training loss is higher than the validation loss. The final validation loss of “Seq2Seq+LST” is slightly lower than that of “Seq2Seq”. This result shows the regularization effect of LST.
6 Conclusions
In this paper, we proposed the LST training approach to integrating an external RNNLM into a Seq2Seq model. An RNNLM is first trained on large scale external text data. Then, the RNNLM provides soft labels of the training transcriptions to train the Seq2Seq model. We used a transformer based Seq2Seq model as the backbone, and conducted experiments on the publicly available Chinese datasets AISHELL-1 (speech) and CLMAD (external text). The experiments demonstrate the effectiveness of our proposed approach. We will try integrating more powerful language models into Seq2Seq systems in the future.
This work is supported by the National Key Research & Development Plan of China (No.2017YFB1002801), the National Natural Science Foundation of China (NSFC) (No.61425017, No.61831022, No.61773379, No.61603390), the Strategic Priority Research Program of Chinese Academy of Sciences (No.XDC02050100), and Inria-CAS Joint Research Project (No.173211KYSB20170061).
-  D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio, “End-to-end attention-based large vocabulary speech recognition,” international conference on acoustics, speech, and signal processing, pp. 4945–4949, 2016.
-  W. Chan, N. Jaitly, Q. Le, and O. Vinyals, “Listen, attend and spell: A neural network for large vocabulary conversational speech recognition,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 4960–4964.
-  L. Dong, S. Xu, and B. Xu, “Speech-transformer: a no-recurrence sequence-to-sequence model for speech recognition,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5884–5888.
-  C.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina et al., “State-of-the-art speech recognition with sequence-to-sequence models,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 4774–4778.
-  C. Gulcehre, O. Firat, K. Xu, K. Cho, L. Barrault, H. Lin, F. Bougares, H. Schwenk, and Y. Bengio, “On using monolingual corpora in neural machine translation,” arXiv: Computation and Language, 2015.
-  A. Kannan, Y. Wu, P. Nguyen, T. N. Sainath, Z. Chen, and R. Prabhavalkar, “An analysis of incorporating an external language model into a sequence-to-sequence model,” pp. 5824–5828, 2018.
-  A. Sriram, H. Jun, S. Satheesh, and A. Coates, “Cold fusion: Training seq2seq models together with language models.” pp. 387–391, 2018.
-  C. Shan, C. Weng, G. Wang, D. Su, M. Luo, and D. Yu, “Component fusion: Learning replaceable language model component for end-to-end speech recognition system,” in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019.
-  G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, 2015.
-  H. Bu, J. Du, X. Na, B. Wu, and H. Zheng, “AIShell-1: An open-source mandarin speech corpus and a speech recognition baseline,” in 2017 20th Conference of the Oriental Chapter of the International Coordinating Committee on Speech Databases and Speech I/O Systems and Assessment (O-COCOSDA). IEEE, 2017, pp. 1–5.
-  Y. Bai, J. Tao, J. Yi, Z. Wen, and C. Fan, “CLMAD: A chinese language model adaptation dataset,” in The Eleventh International Symposium on Chinese Spoken Language Processing (ISCSLP 2018), 2018.
-  J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, “Attention-based models for speech recognition,” in Advances in neural information processing systems, 2015, pp. 577–585.
-  Y. Kim and A. M. Rush, “Sequence-level knowledge distillation,” arXiv preprint arXiv:1606.07947, 2016.
-  J. Li, M. L. Seltzer, X. Wang, R. Zhao, and Y. Gong, “Large-scale domain adaptation via teacher-student learning,” arXiv preprint arXiv:1708.05466, 2017.
-  J. Andrés-Ferrer, N. Bodenstab, and P. Vozila, “Efficient language model adaptation with noise contrastive estimation and kullback-leibler regularization,” Proc. Interspeech 2018, pp. 3368–3372, 2018.
-  J. Chorowski and N. Jaitly, “Towards better decoding and language model integration in sequence to sequence models,” Proc. Interspeech 2017, pp. 523–527, 2017.
-  G. Pereyra, G. Tucker, J. Chorowski, Ł. Kaiser, and G. Hinton, “Regularizing neural networks by penalizing confident output distributions,” arXiv preprint arXiv:1701.06548, 2017.
-  J. Li and M. Sun, “Scalable term selection for text categorization,” in Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), 2007.
-  A. Rousseau, “Xenc: An open-source tool for data selection in natural language processing,” The Prague Bulletin of Mathematical Linguistics, vol. 100, pp. 73–82, 2013.
-  R. C. Moore and W. Lewis, “Intelligent selection of language model training data,” in Proceedings of the ACL 2010 conference short papers. Association for Computational Linguistics, 2010, pp. 220–224.
-  S. Zhou, L. Dong, S. Xu, and B. Xu, “Syllable-based sequence-to-sequence speech recognition with the transformer in mandarin chinese,” arXiv preprint arXiv:1804.10752, 2018.
-  O. Press and L. Wolf, “Using the output embedding to improve language models,” arXiv preprint arXiv:1608.05859, 2016.