System Combination for Short Utterance Speaker Recognition

by Lantian Li, et al.
Tsinghua University

For text-independent short-utterance speaker recognition (SUSR), performance often degrades dramatically. This paper presents a combination approach to the SUSR task with two phonetic-aware systems: one is the DNN-based i-vector system and the other is our recently proposed subregion-based GMM-UBM system. The former employs phone posteriors to construct an i-vector model in which the shared statistics offer stronger robustness against limited test data, while the latter establishes a phone-dependent GMM-UBM system which represents speaker characteristics in more detail. A score-level fusion is implemented to integrate the respective advantages of the two systems. Experimental results show that for the text-independent SUSR task, both the DNN-based i-vector system and the subregion-based GMM-UBM system outperform their respective baselines, and the score-level system combination delivers further performance improvement.






1 Introduction

After decades of research, current text-independent speaker recognition (SRE) systems can obtain rather good performance, provided the test utterances are sufficiently long [1, 2, 3]. However, if the utterances are short, serious performance degradation is often observed. For instance, Vogt et al. [4] reported that when the test speech was shortened, the performance degraded sharply in terms of equal error rate (EER), from 6.34% to 23.89%, on a NIST SRE task. This degradation seriously limits the application of SRE in practice, since long-duration tests would impact user experience significantly, and in many situations it is very difficult, if not impossible, to collect sufficient test data, for example in forensic applications. How to improve the performance of speaker recognition on short utterances (SUSR) remains an open research topic.

A multitude of studies have been conducted on SUSR. For example, in [5], the authors showed that performance on short utterances can be improved by JFA. This work was extended in [6], which reported that the i-vector model can distill speaker information in a more effective way, making it more suitable for SUSR. In addition, a score-based segment selection technique was proposed in [7]; the authors reported a relative EER reduction of 22% on a recognition task with short test utterances.

We argue that the difficulty associated with text-independent SUSR can be largely attributed to the mismatched distributions of speech data between enrollment and test. Assume that the enrollment speech is sufficiently long, so that the speaker model is well trained. If the test speech is sufficient as well, the distribution of the test data tends to match the distribution represented by the speaker model; however, if the test speech is short, only a part of the probability mass represented by the speaker model is covered by the test speech. For a GMM-UBM system, this means that only a few Gaussian components of the model are covered by the test data, and therefore the likelihood evaluation is biased. For the i-vector model, since the Gaussian components share statistics via a single latent variable, the impact of short test speech is partly alleviated. However, the limited data still leads to insufficient estimation of the Baum-Welch statistics, resulting in a less reliable i-vector inference.
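To make the coverage argument concrete, the following toy sketch (a hypothetical 1-D 16-component "UBM" with synthetic frames, not the paper's actual models) counts how many Gaussian components accumulate non-negligible soft occupancy for a long versus a short test utterance:

```python
import math
import random

def gauss_pdf(x, mu, var):
    """Density of a 1-D Gaussian."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def covered_components(frames, means, var=1.0, thresh=0.5):
    """Count components whose accumulated soft occupancy exceeds `thresh`."""
    occ = [0.0] * len(means)
    for x in frames:
        post = [gauss_pdf(x, m, var) for m in means]
        z = sum(post)
        for k, p in enumerate(post):
            occ[k] += p / z
    return sum(o > thresh for o in occ)

random.seed(0)
means = [4.0 * k for k in range(16)]                      # toy 16-component "UBM"
sample = lambda n: [random.gauss(random.choice(means), 1.0) for _ in range(n)]

long_cov = covered_components(sample(2000), means)        # "long" test utterance
short_cov = covered_components(sample(20), means)         # "short" test utterance
print(long_cov, short_cov)
```

With plenty of frames essentially every component receives occupancy; with only 20 frames a sizeable fraction of the components receive none, which is exactly the biased-likelihood situation described above.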

A possible solution to the text-independent SUSR problem is to identify the phone content of the speech signal, and then model and evaluate speakers on individual phones. We call this the ‘phonetic-aware’ approach. It can be regarded as a transfer from a text-independent task to a text-dependent task; the latter is certainly more resilient to short utterances, as has been demonstrated in [8].

Two phonetic-aware approaches have been proposed. One is the subregion model based on the GMM-UBM architecture [9], and the other is the DNN-based i-vector model [10, 11]. Both approaches employ an automatic speech recognition (ASR) system to generate phone transcriptions or posteriors for the enrollment speech, and then establish a phonetic-aware speaker model based on those transcriptions or posteriors. The two approaches, however, differ in model structure and implementation. The subregion modeling approach builds multiple phone-dependent UBMs and speaker GMMs, and evaluates test speech on the phone-dependent models. The DNN-based i-vector approach, in contrast, keeps the single UBM/GMM framework, but relates each Gaussian component to a phone or a phone state. The former tends to be more flexible when learning speaker characteristics, while the latter is more robust against limited test data, due to the low-dimensional latent variable that is shared among all the Gaussian components. We therefore argue that the two approaches can be combined, so that their respective advantages are integrated.

The rest of the paper is organized as follows: Section 2 discusses some related work, Section 3 presents the subregion model, and Section 4 describes the combination approach. Section 5 presents the experiments, and the entire paper is concluded in Section 6.

2 Related work

The idea of employing phonetic information in speaker recognition has been investigated in previous studies. For instance, Omar et al. [12] proposed to derive UBMs from Gaussian components of a GMM-based ASR system, with a K-means clustering approach based on the symmetric KL distance. The DNN-based i-vector method was proposed in [10, 11]; in that work, posteriors of senones (context-dependent states) generated by a DNN trained for ASR were used for model training as well as i-vector inference. Note that all these studies focus on relatively long utterances (5-10 seconds), whereas our study in this paper focuses on much shorter utterances.

3 Subregion modeling

We briefly describe the subregion model that we presented recently [9]. The basic idea is presented first, and then the implementation details are described.

3.1 Acoustic subregions

The conventional GMM-UBM system treats the entire acoustic space as a whole probabilistic space, and computes the likelihood of an input speech signal by a GMM, formulated as follows:

p(x|s) = \sum_k \pi_k N(x; \mu_{s,k}, \Sigma_{s,k})

where x denotes the speech signal, and N(x; \mu, \Sigma) represents a Gaussian distribution with \mu as the mean and \Sigma as the covariance matrix. Furthermore, k indexes the Gaussian component, and s indexes the speaker. \pi_k is a prior distribution on the k-th component. Roughly speaking, this model splits the acoustic space into a number of subregions, and each subregion is modelled by a Gaussian distribution.

There are at least three potential problems with this model: (1) the subregion splitting is based on unsupervised clustering (via the EM algorithm [13]), so it is not necessarily phonetically meaningful; (2) each subregion is modeled by a single Gaussian, which may be too simple; (3) the priors over the subregions are fixed, independent of the input signal.

The subregion model was proposed to solve these problems. Firstly, the acoustic space is split into subregions that roughly correspond to phonetic units (e.g., phones); secondly, each subregion is modelled by a GMM instead of a single Gaussian; thirdly, the weight for each subregion is based on the posterior P(j|x) instead of the prior \pi_j. This is formulated as follows:

p(x|s) = \sum_j P(j|x) \sum_k c_{j,k} N(x; \mu_{s,j,k}, \Sigma_{s,j,k})

where j indexes the subregions and k indexes the Gaussians within a subregion GMM. A key component of this model is the posterior probability P(j|x), which is not a pre-trained constant value, but an assignment of each signal to the subregions. In our study, this quantity is generated by an ASR system.
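The posterior-weighted evaluation can be sketched as follows; the 1-D GMMs and the posteriors are toy values chosen for illustration (in the paper the posteriors come from an ASR system):

```python
import math

def log_gauss(x, mu, var):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def subregion_likelihood(x, posteriors, subregion_gmms):
    """p(x|s) = sum_j P(j|x) * sum_k c_{j,k} N(x; mu_{j,k}, var_{j,k})."""
    total = 0.0
    for p_j, gmm in zip(posteriors, subregion_gmms):
        total += p_j * sum(c * math.exp(log_gauss(x, mu, var)) for c, mu, var in gmm)
    return total

# toy speaker model: two subregions, each a 2-component 1-D GMM (weight, mean, var)
gmms = [
    [(0.5, -2.0, 1.0), (0.5, -1.0, 1.0)],   # subregion 1
    [(0.5,  1.0, 1.0), (0.5,  2.0, 1.0)],   # subregion 2
]
# P(j|x) would come from the ASR system; here it is just a toy value
lik = subregion_likelihood(-1.5, [0.9, 0.1], gmms)
```

Note how the likelihood of a frame is dominated by the subregion the ASR assigns it to, rather than by fixed global priors.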

3.2 Speech units

The inventory of speech units varies across languages. In Chinese, the language this paper focuses on, Initials and Finals (IFs) are the most commonly used units [14]. Roughly speaking, Initials correspond to consonants, and Finals correspond to vowels and nasals. Among the IFs, Finals are recognized to convey more speaker-related information [15, 16], and are therefore used as the speech units in this study.

Using individual Finals to train the subregion model is not very practical, because there is a large number of Finals, and most Finals have only limited data in both training and test. A possible solution is to cluster similar units together and build subregion models on the resulting speech unit classes. In this study, we develop a vector quantization (VQ) method based on the K-means algorithm to conduct the clustering.
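As an illustration of the VQ step, here is a minimal plain K-means over hypothetical per-Final feature vectors; the features, unit names and cluster count are toy values, not the ones used in the paper:

```python
def kmeans(points, k, iters=20):
    """Plain K-means (the VQ step), with naive deterministic initialisation."""
    centers = list(points[:k])
    assign = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return assign

# hypothetical 2-D feature vectors per Final (e.g., averaged acoustic statistics)
finals = {"a": (0.0, 0.0), "e": (5.0, 5.0), "ao": (0.1, 0.0),
          "ei": (5.1, 5.0), "an": (0.0, 0.1), "en": (5.0, 5.1)}
names = list(finals)
labels = kmeans([finals[n] for n in names], k=2)
classes = {}
for n, c in zip(names, labels):
    classes.setdefault(c, []).append(n)
print(classes)
```

Acoustically similar Finals land in the same class, and each class then gets its own subregion UBM.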

3.3 Subregion modeling based on speech unit classes

Denote the speech unit classes (Final clusters) by SUC-j. A subregion UBM can be trained for each SUC-j with the training data that are aligned to the Finals in SUC-j by the ASR system; denote this subregion UBM by \Lambda_j. The speaker-dependent subregion GMMs, denoted by \Lambda_{s,j} for speaker s, can then be trained based on the subregion UBMs, using the enrollment data that have been aligned to the Finals of each cluster.

Once the speaker-dependent subregion GMMs are trained, a test utterance can be scored on each subregion. Suppose a test utterance contains a number of Finals according to the decoding result of speech recognition, and denote the speech unit class of the i-th Final by c(i). Further denote the speech segment of this unit by x_i, and its length by L_i. The score of x_i is measured by the length-normalised log likelihood ratio between the subregion speaker-dependent GMM \Lambda_{s,c(i)} and the subregion UBM \Lambda_{c(i)}, where s denotes the speaker. This is formulated by:

score(x_i) = (1/L_i) [ log p(x_i | \Lambda_{s,c(i)}) - log p(x_i | \Lambda_{c(i)}) ]

Finally, the score of the entire utterance can be computed as the average of the subregion-based scores.
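A minimal sketch of this segment-level scoring and averaging, with toy 1-D GMMs standing in for the subregion speaker models and UBMs:

```python
import math

def log_gauss(x, mu, var):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def gmm_loglik(frames, gmm):
    """Log-likelihood of a segment under a 1-D GMM given as (weight, mean, var) triples."""
    return sum(math.log(sum(c * math.exp(log_gauss(x, mu, var)) for c, mu, var in gmm))
               for x in frames)

def utterance_score(segments, speaker_gmms, ubms):
    """Average over Final segments of the length-normalised log-likelihood ratio
    (1/L_i) * [log p(x_i | speaker GMM of class c(i)) - log p(x_i | UBM of c(i))].
    Each segment is (frames, class_index)."""
    scores = [(gmm_loglik(f, speaker_gmms[c]) - gmm_loglik(f, ubms[c])) / len(f)
              for f, c in segments]
    return sum(scores) / len(scores)

# toy models for a single speech unit class: speaker GMM shifted away from the UBM
ubms = {0: [(1.0, 0.0, 1.0)]}
spk  = {0: [(1.0, 1.0, 1.0)]}

target_score = utterance_score([([0.8, 1.1, 0.9], 0)], spk, ubms)
impostor_score = utterance_score([([-0.2, 0.1], 0)], spk, ubms)
```

Frames drawn near the speaker model score positively, frames matching only the UBM score negatively, and the per-segment normalisation keeps long and short Finals comparable.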

4 System combination

In this section, we first describe the difference between the subregion model and another phonetic-aware method, the DNN-based i-vector model, and then present the combination system.

4.1 DNN-ivector and subregion model

The DNN-based i-vector approach proposed by Lei and colleagues [10] replaces GMM-based posteriors with DNN-generated posteriors when computing the Baum-Welch statistics for model training and i-vector inference. The DNN model is trained for speech recognition, so the output targets correspond to phones or states. This essentially builds a UBM and speaker GMMs whose Gaussian components correspond to phones or states. This is quite similar to the subregion model, though the structures of the two models are different. On the one hand, the subregion model builds a GMM for each subregion, while the DNN-based i-vector approach still assumes a single Gaussian for each subregion. In this respect, the subregion model tends to be more flexible and represents speaker characteristics in more detail. On the other hand, the subregions in the subregion model are relatively independent, whereas the subregions in the DNN-ivector model share statistics via the latent variable (the i-vector). This sharing may lead to stronger robustness against limited test data.
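The zeroth- and first-order Baum-Welch statistics accumulated with DNN-supplied posteriors can be sketched as follows; the frame vectors and posteriors below are toy values (in practice the posteriors come from the ASR DNN's senone outputs):

```python
def baum_welch_stats(frames, posteriors):
    """Zeroth- (N) and first-order (F) statistics accumulated with frame posteriors.

    frames:     list of D-dimensional feature vectors
    posteriors: per frame, gamma_t(k) over K components (here: DNN senone posteriors)
    """
    K, D = len(posteriors[0]), len(frames[0])
    N = [0.0] * K                              # N_k = sum_t gamma_t(k)
    F = [[0.0] * D for _ in range(K)]          # F_k = sum_t gamma_t(k) * x_t
    for x, gamma in zip(frames, posteriors):
        for k in range(K):
            N[k] += gamma[k]
            for d in range(D):
                F[k][d] += gamma[k] * x[d]
    return N, F

# toy example: two frames, two "senones"
frames = [[1.0, 0.0], [0.0, 2.0]]
posts  = [[0.9, 0.1], [0.2, 0.8]]
N, F = baum_welch_stats(frames, posts)
print(N, F)
```

The only change relative to a GMM-based i-vector system is where the gamma values come from; the downstream T-matrix training and i-vector inference consume N and F unchanged.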

4.2 Score-level system combination

Given the differences between the two phonetic-aware models and their respective advantages, it is reasonable to combine them. The combination system involves three components. Firstly, a DNN model for ASR is trained and used to generate the phonetic information: phone posteriors and phone alignments. Secondly, the phone posteriors are used to train the DNN-based i-vector model, and the phone alignments are used to build the subregion model. Thirdly, when scoring a test utterance, the scores derived from the DNN-ivector system and the subregion GMM-UBM system are averaged to make the final decision. Fig. 1 illustrates the system framework.

Figure 1: The diagram of the score-level system combination.

5 Experiments

5.1 Database

5.1.1 Database for evaluation (SUD12)

There is not a standard database for performance evaluation on text-independent SUSR tasks. Therefore, we first designed and recorded a database suitable for SUSR research and published it for research usage. The database was named “SUD12” [9, 17], and was designed on the principle of guaranteeing sufficient IF coverage. In order to focus on short utterances and exclude other factors such as channel and emotion, the recording was conducted in the same room with the same microphone, and the reading style was neutral. The database consists of male speakers and female speakers, and all the utterances are in standard Chinese. For each speaker, there are Chinese sentences, each of which contains Chinese characters. The sampling rate is kHz with -bit precision.

The enrollment database involves all the speakers, each with sufficient effective speech for enrollment. The test database consists of a subset of the speakers, and each speaker has a set of short utterances that cover all the Finals; each utterance is at most a few seconds long. With the test database, target trials and non-target trials are defined for performance evaluation.

5.1.2 Database for UBM training (863DB)

The speech data used to train the UBMs, subregion UBMs and T-matrix were chosen from the 863 Chinese speech corpus. The 863 database was carefully designed to cover all the Chinese IFs, so it is particularly suitable for training subregion UBMs based on Final classes. In this study, we denote the selected subset by 863DB.

5.2 Experimental conditions

The Kaldi toolkit [18] was used to conduct the experiments. Following the standard recipe of SRE08, the acoustic feature was the conventional Mel frequency cepstral coefficients (MFCCs), involving the static components plus their first- and second-order derivatives. In addition, a simple energy-based voice activity detection (VAD) was performed before feature extraction.

The ASR system used to generate the phone alignments was a large-scale DNN-HMM hybrid system, trained using Kaldi following the WSJ S5 recipe. The features were 40-dimensional Fbanks, spliced by a context window of frames, with an LDA (linear discriminant analysis) transform applied to reduce the dimensionality. The output layer of the DNN corresponded to the GMM senones. The DNN was trained with a large amount of speech data, and decoding employed a powerful n-gram language model.

We chose the conventional GMM-UBM approach to construct the baseline SUSR system. The UBM was trained with 863DB, and SUD12 was employed for evaluation. With the enrollment data, the speaker GMMs were derived from the UBM by MAP adaptation, where the adaptation factor was tuned for the best EER on the test set. For comparison, a GMM-based i-vector system was also constructed, whose training was based on the same UBM as the GMM-UBM system.

For the DNN-based i-vector system, the DNN model was trained following the same procedure as the one used for the ASR system, but with a smaller number of senones, comparable to the number of Gaussian components of the GMM-UBM system.

5.3 Basic results

We first investigated the subregion model based on speech unit classes. For this model, the number of speech unit classes needs to be defined beforehand. In our experiments, we observed that either too small or too large a clustering number leads to suboptimal performance; the optimal setting in our experiments was six classes [9]. Table 1 shows the derived unit classes. It can be seen that the resulting clusters are intuitively reasonable.

Class Speech Units
1 a, ao, an, ang, ia, iao, ua
2 e, ei, ai, i, ie, uei, iii
3 iou, ou, u, ong, uo, o
4 v, vn, ve, van, er
5 en, ian, uan, uen, uai, in, ii, ing
6 eng, iang, iong, uang, ueng
Table 1: Speech unit classes derived by k-means clustering.

The results in terms of EER are presented in Table 2, where ‘GMM-UBM’ is the GMM-UBM baseline system, ‘SBM-DD’ denotes the subregion modeling system (with six speech unit classes), ‘GMM i-vector’ denotes the traditional GMM-based i-vector system with the cosine distance metric, and ‘DNN i-vector’ denotes the DNN-based i-vector system with the cosine distance metric.

We first observe that both the subregion modeling system and the DNN-based i-vector system outperform their respective baselines (‘GMM-UBM’ and ‘GMM i-vector’) in a significant way. This confirms the effectiveness of the two phonetic-aware methods. Besides, it can be seen that the GMM-UBM baseline outperforms the two i-vector systems, but after probabilistic linear discriminant analysis (PLDA) [19] is employed, the i-vector system is improved and outperforms the GMM-UBM system.

System EER (%)
GMM-UBM 28.97
SBM-DD 22.74
GMM i-vector 39.91
DNN i-vector 29.61
DNN i-vector + PLDA 19.16
Combination system 17.43
Table 2: Performance of phonetic-aware methods

5.4 System combination

We combine the ‘DNN i-vector + PLDA’ system and the ‘SBM-DD’ system by a linear score fusion:

score = \alpha * score_{i-vector} + (1 - \alpha) * score_{SBM-DD}

where \alpha is the interpolation factor. Fig. 2 presents the performance with various values of \alpha. It clearly shows that the system combination leads to better performance than each individual system, and indicates a good choice of \alpha. Table 2 reports the results of the combination system with this configuration.

Figure 2: Performance of score-level system combination as a function of the interpolation factor \alpha.
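The fusion sweep and the EER metric used to tune the interpolation factor can be sketched as follows; the per-trial scores are synthetic values chosen for illustration, not the paper's results:

```python
def eer(target_scores, nontarget_scores):
    """Equal error rate: sweep the threshold, take the point where the
    false-reject and false-accept rates are closest, and average them."""
    best_gap, best_eer = None, None
    for thr in sorted(target_scores + nontarget_scores):
        fr = sum(s < thr for s in target_scores) / len(target_scores)
        fa = sum(s >= thr for s in nontarget_scores) / len(nontarget_scores)
        gap = abs(fr - fa)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (fr + fa) / 2
    return best_eer

def fuse(a, b, alpha):
    """Linear score fusion: alpha * system A + (1 - alpha) * system B."""
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

# synthetic per-trial scores for two systems (targets should score high)
tgt_a, tgt_b = [2.0, 1.5, 0.4], [0.9, 1.8, 1.2]
non_a, non_b = [0.5, -0.2, 1.6], [1.0, 0.3, -0.5]

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    e = eer(fuse(tgt_a, tgt_b, alpha), fuse(non_a, non_b, alpha))
    print(f"alpha={alpha:.2f}  EER={e:.3f}")
```

In this contrived example each system alone misclassifies some trials, while an intermediate \alpha separates targets from non-targets perfectly, which mirrors the behaviour reported in Fig. 2.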

6 Conclusions

This paper presents a combination system to deal with short utterances in text-independent speaker recognition. The system combines two phonetic-aware methods: one is the DNN-based i-vector system and the other is the subregion-based GMM-UBM system. The experimental results show that both the DNN-based i-vector system and the subregion-based GMM-UBM system outperform their respective baselines, and a simple score fusion leads to the best performance we have obtained so far. The strategy presented in this paper has been verified in and applied to the mobile banking system of CCB (China Construction Bank), where it has achieved good performance. Future work involves the combination of feature-based and model-based compensation for short utterances, and investigation of phone-discriminative methods.


This work was supported by the National Natural Science Foundation of China under Grants No. 61371136 and No. 61271389, and by the National Basic Research Program (973 Program) of China under Grant No. 2013CB329302.


  • [1] F. Bimbot, J.-F. Bonastre, C. Fredouille, G. Gravier, I. Magrin-Chagnolleau, S. Meignier, T. Merlin, J. Ortega-García, D. Petrovska-Delacrétaz, and D. A. Reynolds, “A tutorial on text-independent speaker verification,” EURASIP Journal on Applied Signal Processing, vol. 2004, pp. 430–451, 2004.
  • [2] T. Kinnunen and H. Li, “An overview of text-independent speaker recognition: From features to supervectors,” Speech communication, vol. 52, no. 1, pp. 12–40, 2010.
  • [3] C. S. Greenberg, V. M. Stanford, A. F. Martin, M. Yadagiri, G. R. Doddington, J. J. Godfrey, and J. Hernandez-Cordero, “The 2012 nist speaker recognition evaluation,” in Proc. INTERSPEECH’13, 2013, pp. 1971–1975.
  • [4] R. Vogt, S. Sridharan, and M. Mason, “Making confident speaker verification decisions with minimal speech,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 6, pp. 1182–1192, 2010.
  • [5] R. J. Vogt, C. J. Lustri, and S. Sridharan, “Factor analysis modelling for speaker verification with short utterances,” in The Speaker and Language Recognition Workshop.   IEEE, 2008.
  • [6] A. Kanagasundaram, R. Vogt, D. B. Dean, S. Sridharan, and M. W. Mason, “i-vector based speaker recognition on short utterances,” in Proceedings of the 12th Annual Conference of the International Speech Communication Association.   International Speech Communication Association (ISCA), 2011, pp. 2341–2344.
  • [7] M. Nosratighods, E. Ambikairajah, J. Epps, and M. J. Carey, “A segment selection technique for speaker verification,” Speech Communication, vol. 52, no. 9, pp. 753–761, 2010.
  • [8] A. Larcher, K.-A. Lee, B. Ma, and H. Li, “Rsr2015: Database for text-dependent speaker verification using multiple pass-phrases,” in Proc. INTERSPEECH’12, 2012.
  • [9] L. Li, D. Wang, C. Zhang, and T. Z. Zheng, “Improving short utterance speaker recognition by modeling speech unit classes,” IEEE Transactions on Audio, Speech, and Language Processing, vol. DOI: 10.1109/TASLP.2016.2544660, 2016.
  • [10] Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren, “A novel scheme for speaker recognition using a phonetically-aware deep neural network,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2014, pp. 1695–1699.
  • [11] P. Kenny, V. Gupta, T. Stafylakis, P. Ouellet, and J. Alam, “Deep neural networks for extracting baum-welch statistics for speaker recognition,” in Odyssey’2014.   Odyssey, 2014.
  • [12] M. K. Omar and J. W. Pelecanos, “Training universal background models for speaker recognition.” in Odyssey’2010.   Odyssey, 2010.
  • [13] T. K. Moon, “The expectation-maximization algorithm,” IEEE Signal Processing Magazine, vol. 13, no. 6, pp. 47–60, 1996.
  • [14] J.-Y. Zhang, T. F. Zheng, J. Li, C.-H. Luo, and G.-L. Zhang, “Improved context-dependent acoustic modeling for continuous chinese speech recognition.” in Proc. INTERSPEECH’01, 2001, pp. 1617–1620.
  • [15] H. Beigi, Fundamentals of speaker recognition.   Springer, 2011.
  • [16] C. Gong, Research on Highly Distinguishable Speech Selection Methods in Speaker Recognition.   Tsinghua University, 2014.
  • [17] C. Zhang, X.-J. Wu, T. F. Zheng, L.-L. Wang, and C. Yin, “A k-phoneme-class based multi-model method for short utterance speaker recognition,” in Asia-Pacific Signal & Information Processing Association Annual Summit and Conference (APSIPA ASC), vol. 20, no. 12, 2012, pp. 1–4.
  • [18] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, and K. Vesely, “The kaldi speech recognition toolkit,” in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding.   IEEE Signal Processing Society, Dec. 2011, IEEE Catalog No.: CFP11SRW-USB.
  • [19] S. J. Prince and J. H. Elder, “Probabilistic linear discriminant analysis for inferences about identity,” in ICCV’07.   IEEE, 2007, pp. 1–8.