The task of speech language identification (SLD) is to automatically recognize the language of a given spoken utterance. SLD is widely used as the front-end of mixed-lingual speech recognition (SR) systems: the language of the speech utterance is recognized first, and then the SR system can call the corresponding decoder to transcribe the input speech into the correct text.
There are thousands of languages in the world, and each language has its own distinguishing features, so much research has been devoted to developing universal, quickly responsive, and effective SLD systems. Generally speaking, research on SLD focuses on two domains. In the front-end domain, researchers look for features that can express the differences between languages; in the back-end domain, effective classification schemes are needed to distinguish diverse languages.
In the feature domain, raw acoustic features, e.g., linear predictive coding (LPC), filter bank features, and formant features, were considered early on. The performance of dynamic features, which include temporal information, has also been investigated. Prosody information, such as the patterns of duration, pitch, and stress of languages, is usually used as additional knowledge to improve the performance of raw acoustic features. Token based features, such as phone, syllable, and word sequences, which contain high-level linguistic information, are also used to realize the SLD function.
In the classifier domain, when using acoustic or prosody features as front-ends, strong statistical models are usually selected to build the SLD system. In early work, different languages are modeled by Gaussian mixture models (GMMs) and log-likelihood ratios are used to make the language identification decision. Hidden Markov models (HMMs) trained on speaker- and text-independent acoustic feature sequences of different languages have also been used to construct SLD systems. Neural networks and support vector machines have likewise been used as back-ends to classify languages. The i-vector based method, which has been successfully used in speaker verification tasks, is also used to represent different languages; combined with a task-oriented probabilistic linear discriminant analysis (PLDA) scoring method, the i-vector-PLDA model has achieved significant success in SLD tasks. When selecting token based features, e.g., phone sequences, as front-ends, an n-gram language model (LM) is needed as the back-end for each target language to evaluate the confidence that the input speech matches that language, which is called phone recognition and language modeling (PRLM). Multiple-PRLM variants based on parallel phone recognition and phone selection on a multilingual phone set have also been discussed.
Recently, with the development of deep learning technology, many deep neural network (DNN) based solutions have also been applied to SLD tasks. In one line of work, a fully connected feed-forward neural network trained on stacked frames of perceptual linear prediction (PLP) features is used to classify languages directly. In another, convolutional neural networks trained on Mel frequency cepstral coefficient (MFCC) and PLP feature maps are applied to build language classifiers. Elsewhere, a DNN is trained to generate frame-level bottleneck (BN) features and these features are used to train i-vector based SLD systems. Segment-level x-vectors, built from the mean and standard deviation of BN features, are also used in language identification. Recurrent neural networks (RNNs), which can model the temporal information of features, are also widely used in SLD tasks: long short-term memory (LSTM) and bidirectional LSTM (BLSTM) networks have been trained to recognize different languages. Many published results show that DNN based SLD methods perform better than statistical-model based methods, such as GMMs and i-vectors, especially on short utterances which cannot supply sufficient statistical information. However, when input utterances are shorter than three seconds, even the performance of DNN based SLD systems declines sharply. In the verbal system of intelligent vehicles, most communications are short utterances, so it is very important to build an SLD system that is suitable for very short utterances. The main problem of SLD on short utterances is the inadequate information in the input speech; to overcome this shortcoming, we build an end-to-end SLD system based on transfer learning features and time-scale modification (TSM).
We use a TSM method to extend the length of short input utterances. The speech rate of a test utterance is adjusted by the phase vocoder method; by splicing the original speech with a speech-rate-increased copy and a speech-rate-decreased copy, the lack of information in short utterances can be made up. PLP features concatenated with pitch features, generated from the length-extended speeches, are used to train the language classifiers. Much research shows that the DNN-BN features generated by a phonetic classifier carry more information than raw acoustic features, so we first use PLP+pitch features to train a Mandarin phoneme classifier, and the pre-trained DNN is used to extract DNN-BN features which include more useful information. In order to suit short duration speeches, the generated DNN-BN feature frames are packed into small blocks of 100 frames. To fit short input utterances with fewer than 100 frames, we use a repeated-padding method to fill the gap. Feature blocks are fed into two LSTM layers, which are suitable for modeling feature sequences. Because the output at the last frame contains information about the whole block, a softmax layer is connected to the last frame of the LSTM outputs to realize the language classification task.
In the following sections, we first introduce the TSM method used for short utterance length extension in Section II. The structures of the neural networks used for DNN-BN feature extraction and language classification are described in Section III. In Section IV, we introduce the experimental configuration, including the database used for evaluating SLD models and the parameters of the SLD models; in the same section, we compare the proposed TSM-DNN-BN-LSTM SLD model with two baseline systems and analyze the experimental results. Conclusions are drawn in Section V.
II Time-scale Modification
Many published results show that the accuracy of SLD systems decreases heavily as input utterances get shorter. To solve this problem, in this paper we use a time-scale modification (TSM) technology to adjust the speech rate of the input utterance. By concatenating speeches with different speech rates, we can extend the length of input utterances and increase the information content of test speeches, so as to improve the performance of the SLD system.
TSM technology changes the speech rate by changing the length of the speech. In this section we introduce a classical TSM method, the phase vocoder, which can modify the speech rate of the input speech without badly damaging pitch and prosody information.
Speech rate usually refers to the speed of pronunciation. Irregular speech rates decrease the accuracy of continuous SR systems. In speaker verification (SV) systems, a mismatch of speech rates between enrollment and verification speeches also degrades performance. During acoustic feature extraction, speech rate information is inevitably contained in the extracted features. In SR and SV tasks, the mismatch of speech rates between training and testing data decreases performance and needs to be restrained. In the SLD task, by contrast, a rich combination of speech rates can improve the performance of the SLD system.
The TSM method can modify the speech rate without changing spectral information, e.g., the fundamental frequency and formants. Many different TSM methods have been proposed; in this paper we select the phase vocoder method, which modifies the speech rate in three steps.
Firstly, the input utterance is segmented into frames of duration N samples with analysis step size H_a. For each frame, a Hanning window is applied to reduce spectral leakage. The short-time Fourier transform (STFT) is applied to each frame and the time-frequency information X(m, k) is generated by equation (1):

X(m, k) = Σ_{n=0}^{N−1} x(mH_a + n) w(n) e^{−j2πkn/N}    (1)

where x stands for the input speech, w(n) is the window function, m is the frame index, k is the frequency bin index, and N is the point number of the discrete Fourier transform.
Secondly, compute the amplitude |X(m, k)| and phase ∠X(m, k), and then modify the phase into a new phase φ′(m, k) following the phase-propagation idea introduced in the original phase vocoder work, so that phases stay consistent under the new step size.

Finally, the inverse short-time Fourier transform (ISTFT) is used to reconstruct the time-domain frames with the new phases.
As shown in Fig. 1, by overlap-adding the reconstructed frames using the synthesis step size H_s, we can modify the length of the input speech.
When the synthesis step size H_s used in the reconstruction processing is shorter than the analysis step size H_a used in the frame segmenting procedure, the speech rate of the newly generated speech is increased. On the contrary, we can reduce the speech rate of the input speech. We can define the changing rate of the original speech rate as

α = H_a / H_s

and the length of the reconstructed speech, L′, built from M frames is:

L′ = (M − 1)H_s + N ≈ L/α

where L is the length of the original speech.
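The three steps above can be sketched as a minimal phase vocoder in Python. This is an illustrative implementation, not the authors' code: function and parameter names are our own, and a production system would use a tuned library routine.

```python
import numpy as np

def phase_vocoder(x, rate, n_fft=2048, hop_syn=512):
    """Time-scale modification via a basic phase vocoder.
    rate > 1 speeds up (shorter output); rate < 1 slows down."""
    hop_ana = int(round(hop_syn * rate))            # analysis hop H_a = rate * H_s
    win = np.hanning(n_fft)
    # --- step 1: windowed STFT with analysis hop ---
    n_frames = 1 + max(0, (len(x) - n_fft) // hop_ana)
    frames = np.stack([x[m*hop_ana : m*hop_ana + n_fft] * win
                       for m in range(n_frames)])
    X = np.fft.rfft(frames, axis=1)
    # --- step 2: phase propagation rescaled to the synthesis hop ---
    k = np.arange(X.shape[1])
    omega = 2 * np.pi * k * hop_ana / n_fft         # expected phase advance per hop
    mag = np.abs(X)
    phase = np.angle(X[0])
    Y = np.empty_like(X)
    Y[0] = X[0]
    for m in range(1, n_frames):
        dphi = np.angle(X[m]) - np.angle(X[m-1]) - omega
        dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap deviation to [-pi, pi)
        phase = phase + (omega + dphi) * hop_syn / hop_ana
        Y[m] = mag[m] * np.exp(1j * phase)
    # --- step 3: ISTFT and overlap-add with synthesis hop ---
    out_len = n_fft + (n_frames - 1) * hop_syn
    y = np.zeros(out_len)
    norm = np.zeros(out_len)
    for m in range(n_frames):
        y[m*hop_syn : m*hop_syn + n_fft] += np.real(np.fft.irfft(Y[m], n_fft)) * win
        norm[m*hop_syn : m*hop_syn + n_fft] += win ** 2
    return y / np.maximum(norm, 1e-8)

# Demo: a 1 s, 440 Hz tone at 16 kHz, slowed down and sped up
sr = 16000
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
y_slow = phase_vocoder(x, rate=0.8)   # longer output, slower speech
y_fast = phase_vocoder(x, rate=1.2)   # shorter output, faster speech
```

The output length scales roughly with 1/rate, matching the relation L′ ≈ L/α.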
The frame duration N is set to 2048 samples (0.128 s) and the discrete Fourier transform size is also 2048. The step size of the reconstructed speech, H_s, is set to 512 samples (32 ms). When ignoring the frame number difference before and after TSM and aligning the three spectrograms to the same size, we can observe that the three aligned spectrograms are very similar. It means that using the TSM method to extend or shorten the same speech does not obviously affect the frequency domain information of the original signal.
Speech-rate-changed speeches generated by the TSM method have little distortion; concatenating these speeches with the original speech supplies more useful information, which is helpful to SLD tasks.
III Structure of Neural Network based SLD model
In order to make the proposed SLD model suitable for short utterances, in the front-end we use the TSM method to extend the length of input signals, so as to increase the information helpful to language recognition, and in the back-end we design a DNN based module to generate more meaningful features to improve the accuracy of SLD.
III-A DNN-BN feature extractor
In the early stage of SLD research, researchers tended to train statistical models to represent different languages. In particular, i-vector based models trained only on raw acoustic features achieved good performance. The performance of statistical models is closely related to the frame length of the evaluation utterances. In short duration situations, the limited testing feature frames cannot supply enough statistical information to SLD models, which severely affects the accuracy of language recognition. In order to enable the trained model to adapt to short utterances, we need to adopt features that include more language-discriminative information than raw acoustic features.
The theoretical foundation of the PRLM model is that languages are discriminated by phonetic properties; this encourages us to build a feature extractor that can extract phonetic information. We build a DNN using morpheme labels as the training target to extract DNN-BN features. In order to consider temporal information, we use concatenated acoustic feature frames as input. The DNN includes five hidden layers; the activation function of the bottom four layers is the sigmoid function, and the top hidden layer is a linear layer which is connected with a softmax output layer.
Because the outputs of the top hidden layer are nearest to the morpheme classification outputs and include abundant morpheme-discriminative information, we use these outputs as DNN-BN features to train language classifiers. Compared with unit-level token features, which also include phonetic information, frame-level DNN-BN features have higher temporal resolution and are more suitable for short utterance SLD.
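As a sketch, the forward pass of this extractor might look as follows. The weights here are random for illustration; the layer sizes follow the configuration reported later in the experimental section (11-frame context of 153-dimensional PLP+pitch features, five 512-unit hidden layers), and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def init_layer(n_in, n_out):
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

# 1683 inputs (153 x 11 context), five 512-unit hidden layers.
# The 6294-way triphone softmax layer is omitted: for feature
# extraction we stop at the top (linear) hidden layer.
dims = [1683, 512, 512, 512, 512, 512]
layers = [init_layer(a, b) for a, b in zip(dims[:-1], dims[1:])]

def extract_bn(x):
    """Forward pass up to the top linear hidden layer = DNN-BN feature."""
    h = x
    for W, b in layers[:-1]:      # bottom four layers use sigmoid
        h = sigmoid(h @ W + b)
    W, b = layers[-1]             # top hidden layer is linear
    return h @ W + b              # 512-dim DNN-BN feature

x = rng.normal(size=(1, 1683))    # one spliced input frame
bn = extract_bn(x)
```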
In some papers, researchers tend to use language labels as the training target to train DNN classifiers, hoping that a DNN can learn language-discriminative information directly; however, the language label is too coarse to supply enough supervisory information. Morpheme labels can provide strong supervision and lead the DNN to learn more useful information layer by layer. Morpheme information also has a close correlation with the language recognition task. Using morpheme labels to generate features for language identification can be viewed as a kind of transfer learning, where a related task is used to pre-train a model for another task.
A DNN-BN feature extractor supervised by morpheme labels also has a cross-language benefit, which means that a DNN trained on one language can learn features that help recognize other languages. This is important for the recognition of uncommon languages that do not have enough training data.
III-B Feature block
In order to make the designed SLD model suitable for short utterances, we package DNN-BN features into short-time blocks. In the testing phase, input utterances are recognized at block level and the average log-likelihood over all blocks is used as the language identification score.
A DNN-BN feature sequence is segmented into blocks with block size B and step size S. For long utterances whose frame number is greater than B, any remaining tail frames are covered by packaging the last B frames separately as a new block. For short utterances whose frame number is less than B, a repeating method is used to increase the frame number of the input features, and then the extended features are packaged following the method used for partitioning long utterances.
In the training step, packaging the extracted features into partially overlapping small blocks makes the utmost of the limited training data and makes the trained model adapt to short utterances. When testing short utterances, the feature repeating method supplies more effective information to the trained language classifiers.
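A minimal sketch of this packaging step, with block size and step size as parameters (the function name is our own; the repeat-padding for short inputs follows the description above):

```python
import numpy as np

def make_blocks(feats, block=100, step=50):
    """Package a (T, D) feature sequence into overlapping (block, D) blocks.
    Sequences shorter than the block size are tiled (repeat-padded)."""
    T = feats.shape[0]
    if T < block:
        reps = -(-block // T)                        # ceil(block / T)
        feats = np.tile(feats, (reps, 1))[:block]    # repeat frames, then cut
        T = block
    starts = list(range(0, T - block + 1, step))
    if starts[-1] + block < T:                       # tail frames form one extra block
        starts.append(T - block)
    return np.stack([feats[s:s + block] for s in starts])
```

For example, a 30-frame utterance is tiled up to one 100-frame block, while a 160-frame utterance yields blocks starting at frames 0, 50, and 60 (the last covering the final 100 frames).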
III-C LSTM-based Language Classifier
The LSTM is a special kind of RNN which can learn long-term dependencies. The memory blocks and gates in LSTM cells let it avoid the long-term dependency problem. LSTMs are suitable for modeling feature sequences and have shown good performance on SLD tasks. In this paper we train an LSTM model with two layers to realize the language classification. As shown in Fig. LABEL:fig:fig_lstm, the language classifier is trained by DNN-BN feature blocks with B frames. The LSTM based model builds a mapping from input feature sequences to hidden layer outputs h_1, ..., h_B. Because the last output frame of the top hidden layer, h_B, is generated from all input features, it can represent the information of the whole input feature block. We feed h_B into a ReLU layer followed by the softmax output layer to produce the language classification result.
Looking inside the LSTM cell, the popular "peephole connections" structure is selected. As shown in Fig. LABEL:fig:fig_lstm, square icons stand for neural network layers and circular icons mean point-wise operations. The associated computation is given as follows:

i_t = σ(W_{xi} x_t + W_{hi} h_{t−1} + w_{ci} ⊙ c_{t−1} + b_i)
f_t = σ(W_{xf} x_t + W_{hf} h_{t−1} + w_{cf} ⊙ c_{t−1} + b_f)
c_t = f_t ⊙ c_{t−1} + i_t ⊙ tanh(W_{xc} x_t + W_{hc} h_{t−1} + b_c)
o_t = σ(W_{xo} x_t + W_{ho} h_{t−1} + w_{co} ⊙ c_t + b_o)
h_t = o_t ⊙ tanh(c_t)
where terms such as W_{xi} x_t denote matrix multiplication and "⊙" means point-wise multiplication.
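A toy NumPy version of one peephole LSTM cell processing a feature block might look like this. Weights are random and all names are our own; the real system uses the TensorFlow implementation. Only the last hidden state, which has seen the whole block, would feed the softmax classifier.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
rng = np.random.default_rng(0)

class PeepholeLSTMCell:
    """Minimal peephole LSTM cell: the input and forget gates also see the
    previous cell state, and the output gate sees the updated cell state."""
    def __init__(self, n_in, n_hid):
        s = lambda *shape: rng.normal(0, 0.1, shape)
        self.W = {g: s(n_in + n_hid, n_hid) for g in "ifco"}   # gate weights
        self.b = {g: np.zeros(n_hid) for g in "ifco"}          # gate biases
        self.p = {g: s(n_hid) for g in "ifo"}                  # peephole weights

    def step(self, x, h, c):
        z = np.concatenate([x, h])                             # joint input
        i = sigmoid(z @ self.W["i"] + self.p["i"] * c + self.b["i"])
        f = sigmoid(z @ self.W["f"] + self.p["f"] * c + self.b["f"])
        c = f * c + i * np.tanh(z @ self.W["c"] + self.b["c"])
        o = sigmoid(z @ self.W["o"] + self.p["o"] * c + self.b["o"])
        return o * np.tanh(c), c

# Run a 100-frame block of 512-dim DNN-BN features through the cell;
# h after the loop corresponds to h_B, the block-level representation.
cell = PeepholeLSTMCell(n_in=512, n_hid=512)
h = c = np.zeros(512)
for x in rng.normal(size=(100, 512)):
    h, c = cell.step(x, h, c)
```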
IV-A Database
The proposed SLD model is evaluated on the AP17-OLR database, which was used for the second oriental language recognition challenge. The database was originally created by Speechocean and the Multilingual Minorlingual Automatic Speech Recognition (M2ASR) project. The database contains 10 languages in total: Kazakh in China (ka-cn), Tibetan in China (ti-cn), Uyghur in China (uy-id), Cantonese in China Mainland and Hong Kong (ct-cn), Mandarin in China (zh-cn), Indonesian in Indonesia (id-id), Japanese in Japan (ja-jp), Russian in Russia (ru-ru), Korean in Korea (ko-kr), and Vietnamese in Vietnam (vi-vn).
The database is divided into a train/dev part and a test part; details about the speaker number and total utterances of each language are described in Table I. Male and female speakers, and the utterances of each speaker, are balanced. Speakers in the train/dev and test subsets have no overlap.
All utterances were recorded by mobile phones, with a sampling rate of 16 kHz and a sample size of 16 bits. In the train/dev subset each language has about 10 hours of recordings that can be used for SLD model training. In order to investigate the performance of the trained SLD model on short duration signals, the AP17-OLR database also supplies some short duration subsets, including train-1s, train-3s, dev-1s, dev-3s, test-1s and test-3s, which are randomly segmented from the train/dev and test subsets, respectively.
IV-B Experimental configurations
In the front-end, acoustic and prosody features are used for SLD model training. The input utterances are segmented into frames of 25 ms length with 10 ms step size; for each frame, 150-dimensional PLP coefficients (50 static + 50 Δ + 50 ΔΔ) concatenated with 3-dimensional pitch features are extracted. Global mean and variance vectors are used to normalize the extracted features. A pre-trained DNN based voice activity detector (VAD) is used to remove silence frames.
All the neural networks are trained with TensorFlow. In the DNN-BN feature extractor training phase, as shown in Fig. LABEL:fig:fig_DNN, 11 concatenated frames of PLP+pitch features are used as training data. The phoneme discrimination DNN includes five hidden layers and the node number of each hidden layer is set to 512, which means the dimension of the DNN-BN features generated by the linear outputs of the top hidden layer is also 512. About 500 hours of Mandarin Chinese speech collected from the Sogou speech input method platform are used to train the phoneme discriminator. Input features are tagged with 6294 triphone labels by a trained acoustic model. The DNN based phoneme classifier is trained with 1683-dimensional (153 × 11 = 1683) features and 6294-dimensional target labels, using cross entropy as the cost function and stochastic gradient descent (SGD) as the optimization method. The learning rate is set to 0.001, the number of training epochs is set to 50, and the mini-batch size is set to 256.
After phoneme classifier training, the trained DNN is used as a feature extractor to generate DNN-BN features. As described in Section III-B, the produced 512-dimensional features are segmented into blocks with block size 100 (about one second) and step size 50.
As described in Fig. LABEL:fig:fig_lstm, the packaged short-time feature blocks are sent into a language classifier with two LSTM layers. The output node numbers of the two LSTM layers are both set to 512. The node number of the ReLU layer is set to 1024. The dimension of the softmax layer is 10, which stands for the languages to be identified. In the language classifier, we also select cross entropy as the cost function and an Adam optimizer is used to update the parameters. The learning rate is set to 0.0002 and the number of training epochs is set to 50. During training, only the parameters of the language classifier part are updated; the parameters of the DNN-BN feature extractor part are fixed.
IV-C Baseline systems
We build two baseline SLD systems based on an i-vector model and an LSTM model trained with 153-dimensional PLP + pitch features.
In the i-vector model, the universal background model (UBM) with 2048 Gaussian mixtures is trained on utterances in the AP17-OLR database. The dimension of the i-vectors is set to 400. The mean i-vector of one language in the train/dev subset can be used to model that language. The score of a test utterance on a particular language is computed as the cosine distance between the i-vector of the test speech and the language model i-vector generated from the train/dev subset.
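The cosine scoring of this baseline can be sketched as follows (a hypothetical helper, assuming the per-language mean i-vectors have already been computed; i-vector extraction itself is not shown):

```python
import numpy as np

def cosine_score(test_ivec, lang_ivecs):
    """Score a test i-vector against per-language mean i-vectors by cosine
    similarity; the highest-scoring language is the identification decision."""
    t = test_ivec / np.linalg.norm(test_ivec)
    scores = {lang: float(t @ (v / np.linalg.norm(v)))
              for lang, v in lang_ivecs.items()}
    return max(scores, key=scores.get), scores

# Toy 2-dimensional example with two language models
langs = {"zh-cn": np.array([1.0, 0.0]), "ja-jp": np.array([0.0, 1.0])}
best, scores = cosine_score(np.array([0.9, 0.1]), langs)
```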
The structure of the baseline LSTM model is similar to the language classifier described in Section III-C. Instead of DNN-BN features, packaged raw PLP + pitch feature blocks with 100 frames length and 50 frames step size are used for model training. The mean log-likelihood, computed from the outputs of the softmax layer, is used as the language identification score.
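The block-to-utterance scoring used by the LSTM systems can be sketched as follows (names are our own; this assumes the classifier emits one logit vector per block):

```python
import numpy as np

def utterance_score(block_logits):
    """Mean log-likelihood scoring: average the per-block log-softmax
    outputs; block_logits has shape (num_blocks, num_languages)."""
    m = block_logits.max(axis=1, keepdims=True)   # shift for numerical stability
    logp = (block_logits - m
            - np.log(np.exp(block_logits - m).sum(axis=1, keepdims=True)))
    return logp.mean(axis=0)                      # mean log-likelihood per language

# Two blocks over three hypothetical languages; language 0 wins overall
logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
s = utterance_score(logits)
```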
IV-D Experimental results
As in LRE15, the performances of different SLD systems are evaluated by the average cost C_avg and equal error rate (EER). The pair-wise loss that composes the miss and false alarm probabilities for a particular target/non-target language pair is defined as:

C(L_t, L_n) = P_{Target} P_{Miss}(L_t) + (1 − P_{Target}) P_{FA}(L_t, L_n)

where L_t and L_n are the target and non-target languages, respectively, and P_{Miss} and P_{FA} are the miss and false alarm probabilities, respectively. P_{Target} is the prior probability for the target language, which is set to 0.5 in the evaluation. C_avg is defined as the average of the above pair-wise performance:

C_avg = (1 / (N(N − 1))) Σ_{L_t} Σ_{L_n ≠ L_t} C(L_t, L_n)

where N is the number of languages.
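Under this pair-wise averaging, C_avg can be computed as in the sketch below (one plausible reading of the average over all target/non-target pairs; in practice p_miss and p_fa come from the system's decisions on the evaluation set):

```python
def c_avg(p_miss, p_fa, p_target=0.5):
    """Average pair-wise cost. p_miss[t] is the miss probability of target
    language t; p_fa[t][n] is the false-alarm probability for pair (t, n)."""
    langs = list(p_miss)
    N = len(langs)
    total = 0.0
    for t in langs:
        for n in langs:
            if n == t:
                continue
            total += p_target * p_miss[t] + (1 - p_target) * p_fa[t][n]
    return total / (N * (N - 1))   # average over all N(N-1) ordered pairs

# Toy 2-language example:
# pair (a,b): 0.5*0.1 + 0.5*0.3 = 0.2; pair (b,a): 0.5*0.2 + 0.5*0.1 = 0.15
value = c_avg({"a": 0.1, "b": 0.2},
              {"a": {"b": 0.3}, "b": {"a": 0.1}})
```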
The performances of different SLD models are evaluated on the full-duration test subset, test-all, and two short duration subsets, test-3s and test-1s. Utterance-level C_avg and EER of the different models are shown in Table II.
Firstly, we compare the performance of the two baseline systems. It can be observed that on the full-duration data set the statistics-based i-vector model performs a little better than the neural-network-based LSTM model, while on the short duration test-1s, because the test utterances cannot supply enough statistical information, the LSTM model performs better than the i-vector model.
Secondly, when we change the training features from raw acoustic features to DNN-BN features (the DNN-BN-LSTM model), the performance of SLD is significantly improved. This indicates that phoneme-distinguishing features are very useful for the SLD task. When abundant training data are available, more complex target labels can help neural networks learn features with richer information.
Then, as described in Section II, the phase vocoder based TSM method is used to extend the length of test utterances (the TSM-DNN-BN-LSTM model). Here, we set the speech rate changing parameter to 0.8 and 1.2, which means the original speech is concatenated with a speed-decreased copy and a speed-increased copy.
From the results in Table II we can see that the TSM based length extending method improves the accuracy of language identification. Without changing the parameters of the trained neural networks, just by simple preprocessing of the input waveform signals, the error rate of the SLD system declines by about 50% on the longer duration data sets (test-all, test-3s) and by about 30% on very short duration speeches (test-1s).
In order to investigate the effect of speech rate on SLD accuracy, we try several different speech rate changing combinations and evaluate their performances on the test-1s data set.
From the results in Table III it can be found that concatenating several speech-rate-changed utterances improves the accuracy of SLD compared with the original short signals. Splicing the original speech with a speech-rate-increased and a speech-rate-decreased signal improves the performance more than splicing two speech-rate-increased or two speech-rate-decreased speeches. The speech rate change should be moderate; too large a change will decrease SLD accuracy.
V Conclusions
In this paper we propose an end-to-end speech language identification (SLD) model. Three measures are used to make the trained model suitable for short utterances. In the waveform domain, we use a time-scale modification (TSM) method to extend the length of input utterances. In the feature domain, we use the transfer learning idea to train a deep phoneme classifier; bottleneck features of the phoneme classifier, which include phoneme-discriminative information, are used to train language classifiers. In the classifier domain, an LSTM based classifier is trained on short-time feature blocks, which makes the trained model fit short duration inputs. The experimental results on the AP17-OLR database show that, compared with the i-vector model and the simple LSTM model, the proposed method can significantly enhance the performance of SLD, especially on short duration utterances. The structure of the proposed SLD model is very simple; the trained model only occupies about 20 MB of disk space. The improvement on the waveform side avoids any change to the SLD model, and the short-time block segmentation improves the operation speed of the LSTM based language classifier. All of these properties suit SLD tasks in intelligent cars, which need a small model and quick responses.
-  M. A. Zissman, “Comparison of four approaches to automatic language identification of telephone speech,” IEEE Transactions on speech and audio processing, vol. 4, no. 1, p. 31, 1996.
-  Z. Ma and A. Leijon, “Modeling speech line spectral frequencies with dirichlet mixture models,” in Proceedings of INTERSPEECH, 2010.
-  J. Taghia and A. Leijon, “Variational inference for Watson mixture model,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 9, pp. 1886–1900, 2015.
-  Z. Ma and A. Leijon, “Human skin color detection in rgb space with bayesian estimation of beta mixture models,” in Proceedings of European Signal Processing Conference, 2010.
-  Z. Tang, D. Wang, Y. Chen, L. Li, and A. Abel, “Phonetic temporal neural model for language identification,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 1, pp. 134–144, 2018.
-  D. Cimarusti and R. Ives, “Development of an automatic identification system of spoken languages: Phase i,” in Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP’82., vol. 7, pp. 1661–1663, IEEE, 1982.
-  J. Foil, “Language identification using noisy speech,” in Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP’86., vol. 11, pp. 861–864, IEEE, 1986.
-  P. A. Torres-Carrasquillo, E. Singer, M. A. Kohler, R. J. Greene, D. A. Reynolds, and J. R. Deller Jr, “Approaches to language identification using gaussian mixture models and shifted delta cepstral features,” in Seventh International Conference on Spoken Language Processing, 2002.
-  Y. K. Muthusamy, “A segmental approach to automatic language identification,” 1993.
-  Z. Ma and A. Leijon, “Human audio-visual consonant recognition analyzed with three bimodal integration models,” in Proceedings of INTERSPEECH, 2009.
-  P. Xu, K. Li, Z. Ma, Y.-Z. Song, L. Wang, and J. Guo, “Cross-modal subspace learning for sketch-based image retrieval: A comparative study,” in Proceedings of IEEE International Conference on Network Infrastructure and Digital Content, 2016.
-  H. Zhou, N. Zhang, D. Huang, Z. Ma, W. Hu, and J. Guo, “Activation force-based air pollution tracing,” in Proceedings of IEEE International Conference on Network Infrastructure and Digital Content, 2016.
-  Z. Wang, Y. Qi, J. Liu, and Z. Ma, “User intention understanding from scratch,” in IEEE International Workshop on Sensing, Processing and Learning for Intelligent Machines, 2016.
-  Y. Chen, Y. Li, F. Qi, Z. Ma, and H. Zhang, “Cycled merging registration of point clouds for 3d human body modeling,” in IEEE International Workshop on Sensing, Processing and Learning for Intelligent Machines, 2016.
-  H. Yu, Z. Ma, M. Li, and J. Guo, “Histogram transform model using mfcc features for text-independent speaker identification,” in Proceedings of IEEE Asilomar Conference on Signals, Systems, and Computers, 2014.
-  P. K. Rana, Z. Ma, J. Taghia, and M. Flierl, “Multiview depth map enhancement by variational bayes inference estimation of dirichlet mixture models,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2013.
-  Z. Ma, R. Martin, J. Guo, and H. Zhang, “Nonlinear estimation of missing lsf parameters by a mixture of dirichlet distributions,” in Proceedings of International Conference on Acoustics, Speech, and Signal Processing, 2014.
-  T. J. Hazen and V. W. Zue, “Segment-based automatic language identification,” The Journal of the Acoustical Society of America, vol. 101, no. 4, pp. 2323–2331, 1997.
-  J.-L. Rouas, J. Farinas, F. Pellegrino, and R. André-Obrecht, “Modeling prosody for language identification on read and spontaneous speech,” in Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP’03). 2003 IEEE International Conference on, vol. 6, pp. I–40, IEEE, 2003.
-  D. Zhu, M. Adda-Decker, and F. Antoine, “Different size multilingual phone inventories and context-dependent acoustic models for language identification,” in Ninth European Conference on Speech Communication and Technology, 2005.
-  T. Schultz, I. Rogina, and A. Waibel, “Lvcsr-based language identification,” in icassp, pp. 781–784, IEEE, 1996.
-  Z. Ma and A. Leijon, “A probabilistic principal component analysis based hidden markov model for audio-visual speech recognition,” in Proceedings of IEEE Asilomar Conference on Signals, Systems, and Computers, 2008.
-  Z. Ma and A. Leijon, “Expectation propagation for estimating the parameters of the beta distribution,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2010.
-  J. L. Hieronymus and S. Kadambe, “Robust spoken language identification using large vocabulary speech recognition,” in Acoustics, Speech, and Signal Processing, 1997. ICASSP-97., 1997 IEEE International Conference on, vol. 2, pp. 1111–1114, IEEE, 1997.
-  Z. Ma, A. Leijon, Z.-H. Tan, and S. Gao, “Predictive distribution of the dirichlet mixture model by local variational inference,” Journal of Signal Processing Systems, vol. 74, no. 3, pp. 359–374, Mar 2014.
-  Z. Ma, S. Chatterjee, W. Kleijn, and J. Guo, “Dirichlet mixture modeling to estimate an empirical lower bound for LSF quantization,” Signal Processing, vol. 104, no. 11, pp. 291–295, Nov. 2014.
-  Z. Ma, H. Li, Q. Sun, C. Wang, A. Yan, and F. Starfelt, “Statistical analysis of energy consumption patterns on the heat demand of buildings in district heating systems,” Energy and Buildings, vol. 85, pp. 464–472, Dec. 2014.
-  Z. Ma, J. Taghia, W. B. Kleijn, A. Leijon, and J. Guo, “Line spectral frequencies modeling by a mixture of von Mises-Fisher distributions,” Signal Processing, vol. 114, pp. 219–224, Sept. 2015.
-  Z. Ma, Z.-T. Tan, and J. Guo, “Feature selection for neutral vector in EEG signal classification,” Neurocomputing, vol. 174, pp. 937–945, 2016.
-  J. Willmore, R. Price, and W. Roberts, “Comparing gaussian mixture and neural network modelling approaches to automatic language identification of speech,” in Aust. Int. Conf. Speech Sci. & Tech, pp. 74–77, 2000.
-  K. Wong and M.-h. Siu, “Automatic language identification using discrete hidden markov model,” in Eighth International Conference on Spoken Language Processing, 2004.
-  S. Nakagawa, Y. Ueda, and T. Seino, “Speaker-independent, text-independent language identification by hmm,” in Second International Conference on Spoken Language Processing, 1992.
-  S. C. Kwasny, B. L. Kalman, W. Wu, and A. M. Engebretson, “Identifying language from speech: An example of high-level, statistically-based feature extraction,” in Proceedings 14th Annual Conference of the Cognitive Science Society, 1992.
-  W. M. Campbell, E. Singer, P. A. Torres-Carrasquillo, and D. A. Reynolds, “Language recognition with support vector machines,” in ODYSSEY04-The Speaker and Language Recognition Workshop, 2004.
-  Z. Ma, “Bayesian estimation of the dirichlet distribution with expectation propagation,” in Proceedings of European Signal Processing Conference, 2012.
-  N. Dehak, P. A. Torres-Carrasquillo, D. Reynolds, and R. Dehak, “Language recognition via i-vectors and dimensionality reduction,” in Twelfth annual conference of the international speech communication association, 2011.
-  D. Martinez, O. Plchot, L. Burget, O. Glembek, and P. Matějka, “Language recognition in ivectors space,” in Twelfth Annual Conference of the International Speech Communication Association, 2011.
-  P. Matejka, L. Burget, P. Schwarz, and J. Cernocky, “Brno university of technology system for nist 2005 language recognition evaluation,” in Speaker and Language Recognition Workshop, 2006. IEEE Odyssey 2006: The, pp. 1–7, IEEE, 2006.
-  Z. Ma and A. Leijon, “Pdf-optimized lsf vector quantization based on beta mixture models,” in Proceedings of INTERSPEECH, 2010.
-  I. Lopez-Moreno, J. Gonzalez-Dominguez, O. Plchot, D. Martinez, J. Gonzalez-Rodriguez, and P. Moreno, “Automatic language identification using deep neural networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 5337–5341, IEEE, 2014.
-  A. Lozano-Diez, R. Zazo-Candil, J. Gonzalez-Dominguez, D. T. Toledano, and J. Gonzalez-Rodriguez, “An end-to-end approach to language identification in short utterances using convolutional neural networks,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015.
-  M. Jin, Y. Song, I. McLoughlin, L.-R. Dai, and Z.-F. Ye, “LID-senone extraction via deep neural networks for end-to-end language identification,” in Proc. of Odyssey, 2016.
-  Q. Sun, H. Li, Z. Ma, C. Wang, J. Campillo, Q. Zhang, F. Wallin, and J. Guo, “A comprehensive review of smart energy meters in intelligent energy networks,” IEEE Internet of Things Journal, vol. 3, no. 4, pp. 464–479, Aug 2016.
-  X. Ma, J. Zhang, Y. Zhang, and Z. Ma, “Data scheme-based wireless channel modeling method: motivation, principle and performance,” Journal of Communications and Information Networks, vol. 2, no. 3, pp. 41–51, Sep 2017.
-  Z. Si, H. Yu, and Z. Ma, “Learning deep features for DNA methylation data analysis,” IEEE Access, vol. 4, pp. 2732–2737, June 2016.
-  Z. Ma and A. E. Teschendorff, “A variational Bayes beta mixture model for feature selection in DNA methylation studies,” Journal of Bioinformatics and Computational Biology, vol. 11, no. 4, 2013.
-  P. Matejka, L. Zhang, T. Ng, H. S. Mallidi, O. Glembek, J. Ma, and B. Zhang, “Neural network bottleneck features for language identification,” in Proceedings of Odyssey, vol. 2014, pp. 299–304, 2014.
-  Y. Song, B. Jiang, Y. Bao, S. Wei, and L.-R. Dai, “I-vector representation based on bottleneck features for language identification,” Electronics Letters, vol. 49, no. 24, pp. 1569–1570, 2013.
-  D. Snyder, D. Garcia-Romero, A. McCree, G. Sell, D. Povey, and S. Khudanpur, “Spoken language recognition using x-vectors,” in Odyssey: The Speaker and Language Recognition Workshop, Les Sables d'Olonne, 2018.
-  J. Gonzalez-Dominguez, I. Lopez-Moreno, H. Sak, J. Gonzalez-Rodriguez, and P. J. Moreno, “Automatic language identification using long short-term memory recurrent neural networks,” in Fifteenth Annual Conference of the International Speech Communication Association, 2014.
-  S. Fernando, V. Sethu, E. Ambikairajah, and J. Epps, “Bidirectional modelling for short duration language identification,” in Proc. Interspeech 2017, pp. 2809–2813, 2017.
-  J. Yuan, M. Liberman, and C. Cieri, “Towards an integrated understanding of speaking rate in conversation,” in Ninth International Conference on Spoken Language Processing, 2006.
-  S. Goldwater, D. Jurafsky, and C. D. Manning, “Which words are hard to recognize? prosodic, lexical, and disfluency factors that increase speech recognition error rates,” Speech Communication, vol. 52, no. 3, pp. 181–200, 2010.
-  C. J. van Heerden, E. Barnard, E. Van Heerden, et al., “Speech rate normalization used to improve speaker verification,” in Proceedings of the Symposium of the Pattern Recognition Association of South Africa, pp. 2–7, Citeseer, 2007.
-  D. Wang and S. S. Narayanan, “Robust speech rate estimation for spontaneous speech,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 8, pp. 2190–2201, 2007.
-  Y. Nejime, T. Aritsuka, T. Imamura, T. Ifukube, and J. Matsushima, “A portable digital speech-rate converter for hearing impairment,” IEEE transactions on rehabilitation engineering, vol. 4, no. 2, pp. 73–83, 1996.
-  Z. Ma, P. K. Rana, J. Taghia, M. Flierl, and A. Leijon, “Bayesian estimation of Dirichlet mixture model with variational inference,” Pattern Recognition, vol. 47, no. 9, pp. 3143–3157, 2014.
-  Z. Ma and A. Leijon, “Bayesian estimation of beta mixture models with variational inference,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 11, pp. 2160–2173, 2011.
-  M. Xiaoxiao, Z. Jian, S. Hongbin, Z. Ruohua, and Y. Yonghong, “Expanding the length of short utterances for short-duration language recognition,” Journal of Tsinghua University (Science and Technology), vol. 58, no. 3, pp. 254–259, 2018.
-  Z. Ma, A. Leijon, and W. B. Kleijn, “Vector quantization of LSF parameters with a mixture of Dirichlet distributions,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 21, no. 9, pp. 1777–1790, Sept 2013.
-  P. Xu, Q. Yin, Y. Huang, Y.-Z. Song, Z. Ma, L. Wang, T. Xiang, W. B. Kleijn, and J. Guo, “Cross-modal subspace learning for fine-grained sketch-based image retrieval,” NEUROCOMPUTING, vol. 278, pp. 75–86, Feb. 2018.
-  H. Yu, Z. H. Tan, Z. Ma, R. Martin, and J. Guo, “Spoofing detection in automatic speaker verification systems using DNN classifiers and dynamic acoustic features,” IEEE Transactions on Neural Networks and Learning Systems, vol. PP, no. 99, pp. 1–12, 2018.
-  Z. Ma, J. H. Xue, A. Leijon, Z. H. Tan, Z. Yang, and J. Guo, “Decorrelation of neutral vector variables: Theory and applications,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 1, pp. 129–143, Jan 2018.
-  Z. Ma, J. Xie, H. Li, Q. Sun, Z. Si, J. Zhang, and J. Guo, “The role of data analysis in the development of intelligent energy networks,” IEEE Network, vol. 31, no. 5, pp. 88–95, 2017.
-  J. Laroche and M. Dolson, “Improved phase vocoder time-scale modification of audio,” IEEE Transactions on Speech and Audio Processing, vol. 7, no. 3, pp. 323–332, 1999.
-  Y. Tian, L. He, Y. Liu, and J. Liu, “Investigation of senone-based long-short term memory rnns for spoken language recognition,” in Proc. Odyssey, pp. 89–93, 2016.
-  F. A. Gers and J. Schmidhuber, “Recurrent nets that time and count,” in Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks. IJCNN 2000. Neural Computing: New Challenges and Perspectives for the New Millennium, vol. 3, pp. 189–194, IEEE, 2000.
-  Z. Tang, D. Wang, Y. Chen, and Q. Chen, “AP17-OLR challenge: Data, plan, and baseline,” in 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), pp. 749–753, IEEE, 2017.
-  M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., “Tensorflow: a system for large-scale machine learning,” in OSDI, vol. 16, pp. 265–283, 2016.
-  Z. Ma, A. E. Teschendorff, A. Leijon, Y. Qiao, H. Zhang, and J. Guo, “Variational Bayesian matrix factorization for bounded support data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 4, pp. 876–89, 2015.
-  J. Taghia, Z. Ma, and A. Leijon, “Bayesian estimation of the von-Mises Fisher mixture model with variational inference,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 9, pp. 1701–1715, Sept 2014.