Speech recognition (ASR) technologies have developed rapidly in recent years, largely due to the powerful deep learning approach [1, 2]. An interesting and important task within ASR research is recognizing multiple languages. One reason that makes multilingual ASR research attractive is that people from different countries communicate more frequently today. Another reason is that resources are limited for most languages, and multilingual techniques may help to improve performance on these low-resource languages.
There has been much work on multilingual ASR, especially with deep neural architectures. The most widely studied architecture is the feature-shared deep neural network (DNN), where the input and low-level hidden layers are shared across languages, while the top-level layers and the output layer are separate for each language [3, 4, 5]. The insight behind this design is that human languages share some commonality at both the acoustic and phonetic levels, so some signal patterns at certain levels of abstraction can be shared.
Despite the brilliant success of the feature-sharing approach, it is only useful for model training, not for decoding. This means that although part of the model structure is shared, in recognition (decoding) the models are used independently for the individual languages, each with its own language model. Whenever more than one language is supported simultaneously, the performance on all the languages decreases significantly, due to inter-language competition in the decoding process. In other words, the feature-sharing approach cannot deal with true multilingual ASR, or more precisely, multilingual decoding.
A possible solution to the multilingual decoding problem is to inform the decoder which language it is currently processing. With this language information, multilingual decoding essentially falls back to monolingual decoding and the performance is recovered. However, language recognition is subject to recognition mistakes, and it requires a sufficient amount of signal to give a reasonable inference, leading to unacceptable delay. Another possibility is to invoke monolingual decoding for each language and then decide which result is correct, based on either confidence scores or a language recognizer. This approach obviously requires more computing resources. In Deep Speech 2, English and Chinese can be jointly decoded under the end-to-end learning framework. However, this relies on the fact that the training data for the two languages are both abundant, so that language identities can be learned by the deep structure. This certainly cannot be migrated to other low-resource languages, and it is difficult to accommodate more languages.
In this paper, we introduce a multi-task recurrent model for multilingual decoding. With this model, the ASR model and the LR model are treated as two components of a unified architecture, where the output of one component is propagated back to the other as extra information. More specifically, the ASR component provides speech information for the LR component to deliver more accurate language information, which in turn helps the ASR component to produce better results. Note that this collaboration between ASR and LR takes place in both model training and inference (decoding).
This model is particularly attractive for multilingual decoding. With this model, the LR component provides language information for the ASR component while decoding an utterance. This language information is produced frame by frame and becomes more and more accurate as decoding proceeds. With this information, the decoder becomes increasingly confident about which language it is processing, and gradually removes decoding paths in unlikely languages. Note that the multi-task recurrent model was proposed in our previous work, where we found that it can learn speech and speaker recognition models in a collaborative way. A similar idea was also proposed independently, though it focused on ASR only. This paper tests the approach on an English-Chinese bilingual recognition task.
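The frame-by-frame sharpening of the language decision described above can be sketched as follows. This is a conceptual illustration only; `running_language_posterior` is a hypothetical helper, not the actual decoder logic used in our experiments:

```python
import numpy as np

def running_language_posterior(frame_log_likes):
    """Accumulate per-frame language log-likelihoods; the posterior
    sharpens as more frames arrive, so a decoder could gradually
    drop decoding paths belonging to unlikely languages."""
    total = np.zeros(frame_log_likes.shape[1])
    posteriors = []
    for lp in frame_log_likes:
        total += lp
        p = np.exp(total - total.max())   # normalize in a stable way
        posteriors.append(p / p.sum())
    return posteriors

# Toy example: 4 frames, each slightly favouring language 0.
frames = np.log(np.array([[0.6, 0.4]] * 4))
posts = running_language_posterior(frames)
# posts[-1][0] > posts[0][0]: confidence grows as evidence accumulates
```

The point is simply that per-frame evidence, even if weak, compounds over time, which is why the language decision improves as decoding proceeds.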
Consider the feature-sharing bilingual ASR. Let $x$ represent the primary input feature, $y^E$ and $y^C$ represent the targets for English and Chinese respectively, and $z$ represent the extra input obtained from the other component (LR in our experiments). The feature-sharing model computes $P(y^E|x)$ and $P(y^C|x)$ with separate output layers, which makes the decoding of the two languages completely separate. What is truly required by multilingual decoding is $P(y|x)$, where $y$ ranges over the targets of both languages. If we regard the extra input $z$ as a language indicator, the model $P(y|x,z)$ is language-aware. Note that the language-aware model is a conditional model with the language context $z$ as the condition. In contrast, a model over the pooled targets without $z$ is essentially a marginal model $\sum_z P(y|x,z)P(z|x)$, which is more complex and less effective to learn.
We refer to the bilingual ASR as a single task, in parallel with the single task of LR. So $P(y|x,z)$ is what we actually compute with the proposed model that jointly trains ASR and LR. This implies that the two languages use the same Gaussian Mixture Model (GMM) system for generative modeling, though the two languages still use their own phone sets.
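The distinction between the conditional and the marginal view can be made concrete with toy numbers. All posteriors below are hypothetical, chosen only to illustrate the argument:

```python
import numpy as np

# Hypothetical per-frame posteriors over a pooled target set,
# conditioned on each language identity z.
p_y_given_xz = {
    "EN": np.array([0.7, 0.2, 0.1, 0.0, 0.0]),  # mass on English targets
    "ZH": np.array([0.0, 0.0, 0.1, 0.3, 0.6]),  # mass on Chinese targets
}
p_z_given_x = {"EN": 0.9, "ZH": 0.1}  # language posterior, e.g. from LR

# Language-aware (conditional) model P(y|x,z): condition on the
# inferred language and use its posterior directly.
conditional = p_y_given_xz["EN"]

# A pooled model without z must effectively marginalize over languages:
# P(y|x) = sum_z P(y|x,z) P(z|x), mixing the two languages' targets.
marginal = sum(p * p_z_given_x[z] for z, p in p_y_given_xz.items())
```

The marginal distribution spreads probability mass across both languages' targets, which is exactly the inter-language competition that degrades multilingual decoding; conditioning on $z$ removes it.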
We first describe the single-task baseline model and then the multi-task recurrent model.
2.1 Basic single-task model
The associated computation is as follows:

$i_t = \sigma(W_{ix} x_t + W_{ir} r_{t-1} + W_{ic} c_{t-1} + b_i)$
$f_t = \sigma(W_{fx} x_t + W_{fr} r_{t-1} + W_{fc} c_{t-1} + b_f)$
$c_t = f_t \odot c_{t-1} + i_t \odot g(W_{cx} x_t + W_{cr} r_{t-1} + b_c)$
$o_t = \sigma(W_{ox} x_t + W_{or} r_{t-1} + W_{oc} c_t + b_o)$
$m_t = o_t \odot h(c_t)$
$r_t = W_{rm} m_t$
$p_t = W_{pm} m_t$
$y_t = W_{yr} r_t + W_{yp} p_t + b_y$

In the above equations, the $W$ terms denote weight matrices, and those associated with the cells ($W_{ic}$, $W_{fc}$, $W_{oc}$) were set to be diagonal in our implementation. The $b$ terms denote bias vectors. $x_t$ and $y_t$ are the input and output variables respectively; $i_t$, $f_t$, $o_t$ represent respectively the input, forget and output gates; $c_t$ is the cell and $m_t$ is the cell output. $r_t$ and $p_t$ are two output components derived from $m_t$, where $r_t$ is recurrent and fed to the next time step, while $p_t$ is not recurrent and contributes to the present output only. $\sigma(\cdot)$ is the logistic sigmoid function, and $g(\cdot)$ and $h(\cdot)$ are non-linear activation functions, often chosen to be hyperbolic tangent. $\odot$ denotes element-wise multiplication.
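As a sanity check, one step of this projected LSTM can be sketched in NumPy. The weight shapes and the diagonal treatment of the peephole weights follow the description above; the dimensions chosen are arbitrary:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstmp_step(x_t, r_prev, c_prev, W, b):
    """One step of an LSTM with recurrent (r) and non-recurrent (p)
    projections. Peephole weights W['ic'], W['fc'], W['oc'] are diagonal
    and stored as vectors, hence the element-wise products."""
    i_t = sigmoid(W['ix'] @ x_t + W['ir'] @ r_prev + W['ic'] * c_prev + b['i'])
    f_t = sigmoid(W['fx'] @ x_t + W['fr'] @ r_prev + W['fc'] * c_prev + b['f'])
    c_t = f_t * c_prev + i_t * np.tanh(W['cx'] @ x_t + W['cr'] @ r_prev + b['c'])
    o_t = sigmoid(W['ox'] @ x_t + W['or'] @ r_prev + W['oc'] * c_t + b['o'])
    m_t = o_t * np.tanh(c_t)      # cell output
    r_t = W['rm'] @ m_t           # recurrent projection (fed back next frame)
    p_t = W['pm'] @ m_t           # non-recurrent projection
    y_t = W['yr'] @ r_t + W['yp'] @ p_t + b['y']
    return y_t, r_t, p_t, c_t

# Tiny random instantiation (dimensions arbitrary: cell 8, projections 4,
# input 6, output 5).
rng = np.random.default_rng(0)
nc, nr, ni, ny = 8, 4, 6, 5
W = {'ix': rng.standard_normal((nc, ni)), 'ir': rng.standard_normal((nc, nr)),
     'fx': rng.standard_normal((nc, ni)), 'fr': rng.standard_normal((nc, nr)),
     'cx': rng.standard_normal((nc, ni)), 'cr': rng.standard_normal((nc, nr)),
     'ox': rng.standard_normal((nc, ni)), 'or': rng.standard_normal((nc, nr)),
     'ic': rng.standard_normal(nc), 'fc': rng.standard_normal(nc),
     'oc': rng.standard_normal(nc),
     'rm': rng.standard_normal((nr, nc)), 'pm': rng.standard_normal((nr, nc)),
     'yr': rng.standard_normal((ny, nr)), 'yp': rng.standard_normal((ny, nr))}
b = {k: np.zeros(nc) for k in 'ifco'} | {'y': np.zeros(ny)}
y, r, p, c = lstmp_step(rng.standard_normal(ni), np.zeros(nr), np.zeros(nc), W, b)
```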
2.2 Multi-task recurrent model
The basic idea of the multi-task recurrent model is to use the output of one task at the current frame as auxiliary information to supervise the other task when processing the next frame. There are many alternative configurations that need to be carefully investigated. In this study, we use the recurrent LSTM model presented in the previous section to build the ASR component and the LR component, as shown in Fig. 2. These two components are identical in structure and accept the same input signal. The only difference is that they are trained with different targets, one for phone discrimination and the other for language discrimination. Most importantly, there are some inter-task recurrent links that combine the two components into a single network, as shown by the dashed lines in Fig. 2.
Fig. 2 shows one simple example, where the recurrent information is extracted from both the recurrent projection $r_t$ and the non-recurrent projection $p_t$, and the information is applied to the non-linear function $g(\cdot)$. We use the superscripts $a$ and $l$ to denote the ASR and LR tasks respectively. The computation for ASR can be expressed as follows:

$i^a_t = \sigma(W^a_{ix} x_t + W^a_{ir} r^a_{t-1} + W^a_{ic} c^a_{t-1} + b^a_i)$
$f^a_t = \sigma(W^a_{fx} x_t + W^a_{fr} r^a_{t-1} + W^a_{fc} c^a_{t-1} + b^a_f)$
$c^a_t = f^a_t \odot c^a_{t-1} + i^a_t \odot g(W^a_{cx} x_t + W^a_{cr} r^a_{t-1} + W^{la}_{cr} r^l_{t-1} + W^{la}_{cp} p^l_{t-1} + b^a_c)$
$o^a_t = \sigma(W^a_{ox} x_t + W^a_{or} r^a_{t-1} + W^a_{oc} c^a_t + b^a_o)$
$m^a_t = o^a_t \odot h(c^a_t)$
$r^a_t = W^a_{rm} m^a_t$
$p^a_t = W^a_{pm} m^a_t$
$y^a_t = W^a_{yr} r^a_t + W^a_{yp} p^a_t + b^a_y$
and the computation for LR is identical in form, with the superscripts $a$ and $l$ exchanged.
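The inter-task link of this configuration can be sketched in NumPy: the cell-input non-linearity of one component receives, in addition to its own input and recurrent state, the other component's projections from the previous frame. The weight names `cr_aux` and `cp_aux` are illustrative, not the paper's notation:

```python
import numpy as np

def coupled_cell_input(x_t, r_self, r_aux, p_aux, W, b_c):
    """Cell-input non-linearity g for one component, augmented with the
    other component's recurrent (r_aux) and non-recurrent (p_aux)
    projections from the previous frame -- the inter-task links of Fig. 2."""
    return np.tanh(W['cx'] @ x_t + W['cr'] @ r_self
                   + W['cr_aux'] @ r_aux + W['cp_aux'] @ p_aux + b_c)

# Shapes only (cell 8, projections 4, input 6); values are random.
rng = np.random.default_rng(1)
W = {'cx': rng.standard_normal((8, 6)), 'cr': rng.standard_normal((8, 4)),
     'cr_aux': rng.standard_normal((8, 4)), 'cp_aux': rng.standard_normal((8, 4))}
g_in = coupled_cell_input(rng.standard_normal(6),
                          np.zeros(4), np.zeros(4), np.zeros(4),
                          W, np.zeros(8))
```

At run time, the ASR and LR components each call such a step per frame, exchanging their `r` and `p` projections, so the two recurrences are truly coupled rather than merely sharing features.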
The proposed method was tested on the Aurora4 and Thchs30 databases, both labelled with word transcripts. There are two language identities, one for English and the other for Chinese. We first present the single-task ASR baseline and then report results with the multi-task joint training model. All the experiments were conducted with the Kaldi toolkit.
Training set: This set involves the training sets of Aurora4 and Thchs30. It consists of utterances. This set was used to train the LSTM-based single-task bilingual system and the proposed multi-task recurrent system. The two subsets were also used to train the monolingual ASR systems respectively.
Test set: This set involves ‘eval92’ from Aurora4 for English and ‘test’ from Thchs30 for Chinese. These two sets consist of and utterances and were used to evaluate the performance of ASR for English and Chinese respectively.
3.2 ASR baseline
The ASR system was built largely following the Kaldi WSJ s5 nnet3 recipe, except that we used a single LSTM layer for simplicity. The dimension of the cell was , and the dimensions of the recurrent and non-recurrent projections were set to . The target delay was frames. The natural stochastic gradient descent (NSGD) algorithm was employed to train the model. The input feature was the -dimensional Fbanks, with a symmetric -frame window to splice neighboring frames. The output layer consisted of units, equal to the total number of pdfs in the conventional GMM system that was trained to bootstrap the LSTM model.
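Frame splicing with a symmetric context window can be sketched as follows. The window size used in the experiments is not restated here; `context=1` below is just an illustration:

```python
import numpy as np

def splice(feats, context):
    """Symmetric frame splicing: each frame is concatenated with `context`
    left and right neighbours; edge frames are padded by repetition."""
    T, _ = feats.shape
    padded = np.vstack([np.repeat(feats[:1], context, axis=0), feats,
                        np.repeat(feats[-1:], context, axis=0)])
    # Column block t holds frames shifted by offset t - context.
    return np.hstack([padded[t:t + T] for t in range(2 * context + 1)])

frames = np.arange(8.0).reshape(4, 2)   # 4 frames of 2-dim features
spliced = splice(frames, context=1)     # 4 frames of 6-dim spliced features
```

Splicing gives the LSTM a wider acoustic context per frame at the cost of a larger input dimension.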
The baselines of the monolingual ASR systems are presented in Table 1, where the two languages were trained and decoded separately. We then present the baseline of the bilingually-trained system in Table 2, where a unified GMM system was shared. For the latter, we first decoded the two languages with English and Chinese language models (LMs) respectively, denoted as 'mono-LM'; we then merged the two LMs with a mixture weight of using the tool ngram, so that both languages can be decoded within a single unified graph built with weighted finite-state transducers, denoted as 'bi-LM'.
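The LM merging step can be illustrated with a toy unigram interpolation. Real systems interpolate full n-gram models (e.g. with SRILM's ngram tool, as above), and the actual mixture weight is not restated here, so the value 0.5 below is an assumption for illustration:

```python
def interpolate_lms(p_en, p_zh, lam=0.5):
    """Linear interpolation of two word distributions over the union
    vocabulary: P(w) = lam * P_en(w) + (1 - lam) * P_zh(w)."""
    vocab = set(p_en) | set(p_zh)
    return {w: lam * p_en.get(w, 0.0) + (1 - lam) * p_zh.get(w, 0.0)
            for w in vocab}

# Toy unigram distributions for each language (hypothetical values).
p_en = {"the": 0.6, "cat": 0.4}
p_zh = {"猫": 0.7, "的": 0.3}
p_bi = interpolate_lms(p_en, p_zh)
```

Because the merged LM assigns probability to words of both languages, a single decoding graph can be compiled from it, which is what makes bi-LM decoding possible.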
3.3 Multi-task joint training
Due to the flexibility of the multi-task recurrent LSTM structure, it is not possible to evaluate all the configurations. We explored some typical ones and report the results in Table 3. Note that the last configuration, where the recurrent information is fed to all the gates and the non-linear activation $g(\cdot)$, is equivalent to augmenting the input variable with this information.
From the results shown in Tables 3 and 4, decoded with mono-LM and bi-LM respectively, we first observe that the multi-task recurrent model improves the performance of English ASR more than that of Chinese. We attribute this to several reasons. First, the auxiliary component was designed to perform language recognition and was expected to provide extra language information only; however, since the English and Chinese databases are not from the same source, the speech signals involve substantial channel information, which weakens the effect of the auxiliary language information when channel classification is performed at the same time. Moreover, channel classification is easily achieved by a regular DNN, so the advantage of an additional LR component decays. Second, from the results in Table 2, we find that when using their respective LMs, English gains in performance, while the gain is not obvious for Chinese, even compared with the monolingual results in Table 1. The mono-LM results for Chinese in Table 4 are likewise close to those of the monolingual and bilingual baselines. All of this suggests that it is difficult for any method to deliver remarkable improvement for Chinese under this database configuration, so it is not surprising that the performance on Chinese could not be improved much by the enhanced model. Furthermore, in an additional test on part of the training set, all the multi-task recurrent models performed better than the baseline on both English and Chinese, which means the recurrent models fit, and indeed overfit, the training set closely; this demonstrates the modeling capacity of the proposed model.
We also observe that the multi-task recurrent model still has the potential to exceed the baseline: for example, when the recurrent information was extracted from the recurrent projection and fed into the activation function, performance improved for both English and Chinese. We expect that with more carefully designed architectures, the baseline can be surpassed more consistently.
We reported a multi-task recurrent learning architecture for language-aware speech recognition. Preliminary results of the bilingual ASR experiments on the Aurora4/Thchs30 databases demonstrated that the presented method can exploit both the commonality and the diversity of the two languages to some extent by learning the ASR and LR models simultaneously. Future work involves using better-matched databases from the same source, developing more suitable architectures for language-aware recurrent training, and introducing more than two languages, including resource-scarce ones.
This work was supported by the National Science Foundation of China (NSFC) Project No. 61371136, and the MESTDC PhD Foundation Project No. 20130002120011.
-  G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
-  D. Yu and L. Deng, Automatic Speech Recognition - A Deep Learning Approach, ser. Signals and Communication Technology. Springer, 2015.
-  J.-T. Huang, J. Li, D. Yu, L. Deng, and Y. Gong, “Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013, pp. 7304–7308.
-  A. Ghoshal, P. Swietojanski, and S. Renals, “Multilingual training of deep neural networks,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013, pp. 7319–7323.
-  G. Heigold, V. Vanhoucke, A. Senior, P. Nguyen, M. Ranzato, M. Devin, and J. Dean, “Multilingual acoustic models using distributed deep neural networks,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2013, pp. 8619–8623.
-  D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, A. Coates, G. Diamos et al., “Deep speech 2: End-to-end speech recognition in english and mandarin,” arXiv preprint arXiv:1512.02595, 2015.
-  Z. Tang, L. Li, and D. Wang, “Multi-task recurrent model for speech and speaker recognition,” arXiv preprint arXiv:1603.09643, 2016.
-  X. Li and X. Wu, “Modeling speaker variability using long short-term memory networks for speech recognition,” in Proceedings of the Annual Conference of International Speech Communication Association (INTERSPEECH), 2015.
-  H. Sak, A. Senior, and F. Beaufays, “Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition,” arXiv preprint arXiv:1402.1128, 2014.
-  H. Sak, A. W. Senior, and F. Beaufays, “Long short-term memory recurrent neural network architectures for large scale acoustic modeling,” in Proceedings of the Annual Conference of International Speech Communication Association (INTERSPEECH), 2014, pp. 338–342.
-  D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, and P. Schwarz, “The kaldi speech recognition toolkit,” in Proceedings of IEEE 2011 workshop on automatic speech recognition and understanding. IEEE Signal Processing Society, 2011.
-  D. Povey, X. Zhang, and S. Khudanpur, “Parallel training of deep neural networks with natural gradient and parameter averaging,” arXiv preprint arXiv:1410.7455, 2014.