Automatic speaker verification (ASV) is an important biometric authentication technology. As in most machine learning tasks, a key challenge of ASV is the intermixing of multiple variability factors in the speech signal, which leads to great uncertainty when making genuine/impostor decisions. In principle, two methods can be employed to address this uncertainty: either extract more powerful features that are sensitive to speaker traits but invariant to other variations, or construct a statistical model that can describe the uncertainty and promote the speaker factor.
Most existing successful ASV approaches are model-based, for example the famous Gaussian mixture model-universal background model (GMM-UBM) framework and the subsequent subspace models, including the joint factor analysis approach and the i-vector model. These are generative models that rely heavily on unsupervised learning. Improvements have been made in two directions. The first is to use a discriminative model to boost the discrimination between speakers, e.g., the SVM model for the GMM-UBM approach and the PLDA model for the i-vector approach. The second is to use supervised learning to produce a better representation of the acoustic space, e.g., the DNN-based i-vector method [6, 7]. Almost all model-based methods use raw features, most popularly the Mel frequency cepstral coefficients (MFCC).
In spite of the predominant success of the model-based approach, researchers have never stopped searching for 'fundamental' features of speaker traits. The motivation is two-fold: from the engineering perspective, if a better feature were found, the present complex statistical models could be largely discarded; and from the cognitive perspective, a fundamental feature would help us understand how speaker traits are embedded in speech signals. Driven by these motivations, many researchers have put effort into 'feature engineering' over the past several decades, and new features have been proposed occasionally, from the perspectives of different knowledge domains.
However, compared to the remarkable achievements of the model-based approach, the reward from feature engineering has been rather marginal: after decades, the most useful feature at hand is still MFCC. Interestingly, the same story was told in other fields of speech processing, particularly automatic speech recognition (ASR), until very recently, when deep learning became involved.
The development of deep learning changed the story. Different from historic feature engineering methods that design features by human knowledge, deep learning can learn features automatically from vast amounts of raw data, usually with a multi-layer structure, e.g., a deep neural network (DNN). Through layer-by-layer processing, task-related information is preserved and strengthened, while task-irrelevant variations are diminished and removed. This feature learning has been very successful in ASR, where the learned features have been shown to be highly representative of linguistic content and very robust against variations of other factors.
This success of feature learning in ASR has motivated researchers in ASV to learn speaker-sensitive features. The first success was reported by Ehsan et al. on a text-dependent task. They constructed a DNN model with the speakers in the training set as the targets. Frame-level features were read from the activations of the last hidden layer, and utterance-level representations (called 'd-vectors') were obtained by averaging the frame-level features. In evaluation, the decision score was computed as a simple cosine distance between the d-vectors of the enrollment utterance(s) and the test utterance. The authors reported worse performance with the d-vector system than with the conventional i-vector baseline, but better performance after combining the two systems. This method was further extended by a number of researchers. For example, Heigold et al. used an LSTM-RNN to learn utterance-level representations directly and reported better performance than the i-vector system on the same text-dependent task when a large training database was used. Zhang et al.
utilized convolutional neural networks (CNN) to learn speaker features and an attention-based model to learn how to make decisions, again on a text-dependent task. Liu et al. used the DNN-learned features to build a conventional i-vector system. Recently, Snyder et al. migrated the DNN-based approach to text-independent tasks and reported better performance than the i-vector system when the training data is sufficiently large (102k speakers). All these follow-up studies, however, are not purely feature learning: they all involve a complex back-end model, either neural or probabilistic, to obtain reasonable performance. This is perfectly fine from the perspectives of both research and engineering, but it departs from the initial goal of feature learning: we hope to discover a feature that is sufficiently general and discriminative that it can be employed in a broad range of applications without heavy back-end models. This has been achieved in ASR, but not yet in speaker verification.
In this paper, we present a simple but effective DNN structure that involves two convolutional layers and two time-delay fully-connected layers to learn speaker features. Our experiments demonstrate that this simple model can learn very strong speaker-sensitive features, using speech data from only a few thousand speakers. The learned feature does not require complex back-end models: simple frame averaging is sufficient to produce a strong utterance-level speaker vector, with which a simple cosine distance is good enough to perform text-independent ASV tasks. These results demonstrate that it is possible to discover speaker information from a short-time speech segment, with only a few simple neural propagations.
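As a concrete illustration of this pipeline, the sketch below averages frame-level features into a d-vector and scores a trial with a cosine distance. The random arrays and the 400-dimensional feature size are illustrative stand-ins for real CT-DNN activations:

```python
import numpy as np

def d_vector(frame_features):
    """Average frame-level speaker features into an utterance-level d-vector."""
    return np.mean(frame_features, axis=0)

def cosine_score(enroll_dvec, test_dvec):
    """Cosine distance between two d-vectors, used as the decision score."""
    num = np.dot(enroll_dvec, test_dvec)
    den = np.linalg.norm(enroll_dvec) * np.linalg.norm(test_dvec)
    return num / den

# Toy example: random "frame features" standing in for DNN activations.
rng = np.random.default_rng(0)
enroll = d_vector(rng.normal(size=(200, 400)))   # 200 frames, 400-dim feature
test = d_vector(rng.normal(size=(150, 400)))
score = cosine_score(enroll, test)               # accept if score > threshold
```

In a real system the only model-dependent step is producing `frame_features`; everything after it is the simple back-end described above.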
2 Related work
Our work is a direct extension of the d-vector model presented by Ehsan et al. The extension is two-fold: a CNN/TDNN structure that emphasizes temporal-frequency filtering, more akin to traditional feature engineering; and an experiment on a text-independent task demonstrating that the learned feature is independent of linguistic content and highly speaker sensitive.
This work differs from most existing neural-based ASV methods. For example, the RNN-based utterance-level representation learning is attractive, but the RNN pooling shifts the focus to the entire utterance rather than to frame-level feature learning. The end-to-end neural models proposed by Snyder et al. and Zhang et al. both involve a back-end classifier, which weakens the feature learning component: it is unknown whether the speaker-discriminant information is learned by the classifier or by the feature extractor. The features are therefore not necessarily speaker discriminative and are less generalizable, as the feature extractor depends on the classifier.
This work also differs from methods that combine DNN features and statistical models. In those methods, some speaker information is learned in the feature, but not sufficiently; the feature therefore remains preliminary, and a statistical model has to be used to address the inherent uncertainty. For example, Liu et al. used an ASR-ASV multi-task DNN to produce frame-level features and substituted them for MFCCs when constructing GMM-UBM and i-vector systems. Yao et al. proposed a similar approach, though they used ASR-oriented features to train the GMMs that split the acoustic space, and the original ASV-oriented features as the acoustic feature to construct the i-vector model.
An implication of the above methods is that speaker feature learning is still imperfect: the speaker traits have not been fully extracted and other irrelevant variations remain, so back-end models have to be utilized to improve the discriminative power. In this paper, we show that a better network design can significantly improve the quality of the feature learning, greatly alleviating the reliance on a back-end model.
3 CT-DNN for feature learning
This section presents our DNN structure for speaker-sensitive feature learning. The structure extends the previously proposed d-vector model by using convolutional layers to extract local discriminative patterns from the temporal-frequency space, and time-delay layers to increase the effective temporal context of each frame. We call this structure the CT-DNN.
Figure 1 illustrates the CT-DNN structure used in this work. It consists of a convolutional (CN) component and a time-delay (TD) component, connected by a bottleneck layer. The convolutional component involves two CN layers, each followed by a max-pooling layer; it learns local patterns that are useful for representing speaker traits. The TD component involves two TD layers, each followed by a P-norm layer; it extends the temporal context. The settings of the two components, including the patch size, the number of feature maps, the time-delay window, and the group size of the P-norm, are shown in Figure 1. With these settings, each output covers a fixed-size effective context window. The output of the last P-norm layer is projected to a feature layer, which is connected to the output layer whose units correspond to the speakers in the training data.
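A TD layer can be viewed as splicing frames at fixed time offsets before an affine transform, with the P-norm nonlinearity pooling groups of units. A minimal numpy sketch of these two operations (the offsets, dimensions, and group size are illustrative, not the exact settings in Figure 1):

```python
import numpy as np

def splice(frames, offsets):
    """Time-delay splicing: concatenate frames at the given relative offsets.
    frames: (T, D); returns spliced rows for all valid center positions."""
    T, D = frames.shape
    lo, hi = min(offsets), max(offsets)
    rows = [np.concatenate([frames[t + o] for o in offsets])
            for t in range(-lo, T - hi)]
    return np.asarray(rows)

def p_norm(x, group_size, p=2):
    """P-norm nonlinearity: reduce each group of `group_size` units to its p-norm."""
    T, D = x.shape
    groups = x.reshape(T, D // group_size, group_size)
    return np.linalg.norm(groups, ord=p, axis=2)

frames = np.random.default_rng(1).normal(size=(20, 40))  # 20 frames, 40-dim input
h = splice(frames, offsets=(-2, 0, 2))  # each output sees 5 frames of context
h = p_norm(h, group_size=4)             # dimension reduced by the group size
```

Stacking two such layers widens the effective context window multiplicatively, which is how the compact CT-DNN covers a sizable temporal span per feature.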
This CT-DNN model is rather simple and easily trained. In our study, the natural stochastic gradient descent (NSGD) algorithm was employed for optimization. Once the DNN model has been trained, the speaker feature can be read from the feature layer, i.e., the last hidden layer of the model. As in previous work, the utterance-level representation of a speech segment is derived simply by averaging the speaker features over all frames of the segment.
Following the naming convention of previous work [10, 17], the utterance-level representations derived from the CT-DNN are called d-vectors. At test time, the d-vectors of the test and enrollment utterances are produced respectively, and a simple cosine distance between the two vectors is computed and used as the decision score for the ASV task. As with i-vectors, simple normalization methods can be employed, such as linear discriminant analysis (LDA) and probabilistic LDA (PLDA).
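The LDA normalization mentioned above can be sketched with a generic Fisher LDA on toy d-vectors (a plain eigen-solution, not the Kaldi implementation; the dimensions and speaker counts are illustrative):

```python
import numpy as np

def lda(X, y, dim):
    """Fisher LDA: project X (N, D) onto `dim` directions maximizing
    between-class over within-class scatter. y holds speaker labels."""
    mean = X.mean(axis=0)
    D = X.shape[1]
    Sw = np.zeros((D, D))  # within-class scatter
    Sb = np.zeros((D, D))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean)[:, None]
        Sb += len(Xc) * (d @ d.T)
    # Generalized eigenproblem Sb v = lambda Sw v (small ridge for stability).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(D), Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:dim]].real

rng = np.random.default_rng(2)
# Toy d-vectors: 3 "speakers", 30 vectors each, 10-dim.
X = np.concatenate([rng.normal(loc=i, size=(30, 10)) for i in range(3)])
y = np.repeat(np.arange(3), 30)
W = lda(X, y, dim=2)
Z = X @ W        # LDA-projected d-vectors, then scored by cosine distance
```

The projection suppresses within-speaker variation before the cosine scoring, which is the role LDA plays for both i-vectors and d-vectors here.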
4 Experiments

In this section, we first present the database used in the experiments, and then report the results of the i-vector and d-vector systems. All experiments were conducted with the Kaldi toolkit.
4.1 Database

The Fisher database was used in our experiments. The training and test data are described as follows.
Training set: male and female speakers with utterances randomly selected from the Fisher database. This dataset was used to train the UBM, the T-matrix, and the LDA/PLDA models of the i-vector system, and the CT-DNN model of the d-vector system.
Evaluation set: male and female speakers randomly selected from the Fisher database, with no overlap between the speakers of the training set and the evaluation set. For each speaker, some utterances are used for enrollment and the rest for test.
The tests were conducted under several conditions, each with a different setting for the lengths of the enrollment and test utterances; the conditions are shown in Table 1. All test conditions involve pooled male and female trials. Gender-dependent tests exhibited the same trend, so we report only the results on the pooled data.
4.2 Model settings
We built an i-vector system as the baseline. The raw feature involves MFCCs plus log energy, augmented by its first- and second-order derivatives, resulting in a 60-dimensional feature vector. This MFCC feature was used by the i-vector model. The UBM was composed of a number of Gaussian components, and appropriate dimensionalities were set for the i-vector space and the LDA projection space. The entire system was trained using the Kaldi SRE08 recipe.
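The first- and second-order derivatives of the static features follow the standard delta regression formula; a sketch (the window width and the 20-dimensional static feature are illustrative):

```python
import numpy as np

def deltas(feat, window=2):
    """First-order delta features via the standard regression formula.
    feat: (T, D). Edges are handled by repeating the boundary frames."""
    T, D = feat.shape
    denom = 2 * sum(k * k for k in range(1, window + 1))
    padded = np.pad(feat, ((window, window), (0, 0)), mode="edge")
    d = np.zeros_like(feat)
    for k in range(1, window + 1):
        d += k * (padded[window + k:window + k + T]
                  - padded[window - k:window - k + T])
    return d / denom

mfcc = np.random.default_rng(3).normal(size=(100, 20))  # 20 static coefficients
d1 = deltas(mfcc)          # first-order derivatives
d2 = deltas(d1)            # second-order derivatives (delta of delta)
feature = np.hstack([mfcc, d1, d2])  # 60-dimensional vector, as in the text
```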
For the d-vector system, the architecture follows Figure 1. The input feature was 40-dimensional Fbanks, with a symmetric window used to splice neighboring frames. The number of output units corresponds to the number of speakers in the training data. The speaker features were extracted from the last hidden layer (the feature layer in Figure 1), and the utterance-level d-vectors were derived by averaging the frame-level features. The transform/scoring methods used for the i-vector system were also used for the d-vector system during the test, including cosine distance, LDA, and PLDA. The Kaldi recipe to reproduce our results has been published online at http://data.cslt.org.
4.3 Main results
The results of the i-vector system and the d-vector system in terms of equal error rate (EER%) are reported in Table 2. The two systems were trained with the entire training set, and the results are reported for different conditions.
It can be observed that for both systems, increasing the length of the test utterances always improves performance, though the improvement is more significant for the i-vector system than for the d-vector system. This is understandable: the i-vector system relies on the statistical pattern of the features to build speaker vectors, so more speech frames help; in contrast, the d-vector system uses a simple average of the features to represent a speaker, so the contribution of additional speech frames is marginal.
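For reference, the EER metric used throughout is the operating point where the false-acceptance and false-rejection rates are equal; a simple threshold-sweep sketch:

```python
import numpy as np

def eer(target_scores, nontarget_scores):
    """Equal error rate: sweep thresholds over the observed scores and return
    the point where false acceptance and false rejection are closest."""
    target_scores = np.asarray(target_scores, float)
    nontarget_scores = np.asarray(nontarget_scores, float)
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    best_gap, best_eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(target_scores < t)       # genuine trials rejected
        far = np.mean(nontarget_scores >= t)   # impostor trials accepted
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

# Perfectly separated scores give EER = 0.
print(eer([0.9, 0.8], [0.1, 0.2]))
```

Toolkits interpolate the crossing point more carefully; this version is enough to see what the metric measures.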
The most interesting observation is the clear advantage of the d-vector system in the C(30-3) condition, where the test utterances are short. Because the d-vector system does not use any powerful back-end model, this advantage on short utterances implies that the features learned by the CT-DNN model are rather powerful. The advantage of neural-based models on short utterances was partially observed in the DET curves of Snyder et al., and our results give clearer evidence of this trend.
Another observation is that LDA improves the d-vector system, while PLDA does not. The contribution of LDA suggests that some non-speaker variation remains within the learned feature, which needs more investigation. The failure of PLDA with d-vectors is also a known issue from our previous work. A possible reason is that the residual noise within d-vectors is not Gaussian and so cannot be well modeled by PLDA. Again, more investigation is ongoing.
4.4 Training data size
To investigate the data dependency of the feature learning approach, we varied the size of the training data by selecting different numbers of speakers. The results of the best i-vector system (i-vector + PLDA) and the best d-vector system (d-vector + LDA) are reported for each of the four test conditions in Figure 2, where each picture presents a particular condition. In all test conditions, both systems obtain better performance with more training data, but the i-vector system seems to benefit more from big data. This differs somewhat from the common experience that deep neural models need more data than probabilistic models. It also differs from the previously reported observation that the d-vector system obtained a larger performance improvement than the i-vector system when the number of speakers became very large (102k).
We attribute the relatively less data sensitivity of the d-vector system to two factors: (1) the compact CN and TD layers in the CT-DNN structure require less training data; (2) the major restriction on performance of the d-vector system is not the model training, but the simple average back-end. In the C(30-3) condition where the test utterances are short, the impact of feature average is less significant, and so the true quality of the learned feature is exhibited, leading to a clear performance improvement when more training data are used. In other conditions, however, the improvement obtained by the big data training may have largely been masked by the average back-end.
4.5 Feature discrimination
To check the quality of the learned speaker feature, we use t-SNE to visualize feature samples from several speakers. The samples are selected in two ways: (a) randomly sampled from all the speech frames of a speaker; (b) taken from a particular utterance. The results are presented in Figure 3. The learned features are very discriminative for speakers, but plot (b) shows that there is still some variation caused by linguistic content.
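A visualization of this kind can be reproduced along these lines (assuming scikit-learn is available; the random features here merely stand in for CT-DNN outputs):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in "speaker features": 3 speakers x 20 frames, 16-dim each.
rng = np.random.default_rng(4)
features = np.concatenate([rng.normal(loc=i, size=(20, 16)) for i in range(3)])

# Embed into 2-D for plotting; perplexity must be below the sample count.
embedded = TSNE(n_components=2, perplexity=5, init="pca",
                random_state=0).fit_transform(features)
```

Scatter-plotting `embedded` colored by speaker label reproduces the style of Figure 3.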
A more quantitative test of the feature quality is to examine the extreme case where the test speech is only a few frames long. We start with a segment whose length equals the effective context size of the CT-DNN, so only a single feature is produced; more frames produce more features. Table 3 and Table 4 present the results for two different enrollment utterance lengths. The i-vector and d-vector models used in this experiment were trained with the entire training data.
The results with the d-vector system are striking: with only one feature, i.e., a single context window of speech, the d-vector system still achieves a usable EER under both enrollment conditions. In comparison, the i-vector system largely fails in these conditions.
These results demonstrate that the CT-DNN model has learned a highly discriminative feature. From another perspective, they also suggest that the speaker trait is largely a deterministic short-time property rather than a long-time distributional pattern, and can therefore be extracted from just dozens of speech frames. This indicates that feature learning may be a more reasonable approach than the existing model-based approaches that rely on statistical patterns of raw features.
[Table 3 and Table 4: results of the i-vector and d-vector systems with very short test speech, under the two enrollment conditions. Columns: Systems | Metric | 20 frames | 50 frames | 100 frames.]
5 Conclusions

This paper presented a CT-DNN model to learn speaker-sensitive features. Our experiments showed that the learned feature is highly discriminative and achieves impressive performance when the test utterances are short. This result has far-reaching implications for both research and engineering: on one hand, it suggests that the speaker trait is a kind of short-time pattern and so should be extracted by short-time analysis (including neural learning) rather than long-time probabilistic modeling; on the other hand, it suggests that the feature learning approach could, and perhaps should, be used when the test utterances are short, a condition of interest to many practical applications. Much work remains: how should the feature be modeled in a simple way, if PLDA does not work? How well do the neural-learned features generalize from one task (or language) to another? How can auxiliary information be involved to boost the model quality? All of these need careful investigation.
-  D. Reynolds, T. Quatieri, and R. Dunn, “Speaker verification using adapted gaussian mixture models,” Digital Signal Processing, vol. 10, no. 1, pp. 19–41, 2000.
-  P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, “Joint factor analysis versus eigenchannels in speaker recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, pp. 1435–1447, 2007.
-  N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2011.
-  W. Campbell, D. Sturim, and D. Reynolds, “Support vector machines using GMM supervectors for speaker verification,” IEEE Signal Processing Letters, vol. 13, no. 5, pp. 308–311, 2006.
-  S. Ioffe, “Probabilistic linear discriminant analysis,” Computer Vision ECCV 2006, Springer Berlin Heidelberg, pp. 531–542, 2006.
-  P. Kenny, V. Gupta, T. Stafylakis, P. Ouellet, and J. Alam, “Deep neural networks for extracting baum-welch statistics for speaker recognition,” Odyssey, 2014.
-  Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren, “A novel scheme for speaker recognition using a phonetically-aware deep neural network,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 1695–1699.
-  T. Kinnunen and H. Li, “An overview of text-independent speaker recognition: From features to supervectors,” Speech communication, vol. 52, no. 1, pp. 12–40, 2010.
-  G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
-  V. Ehsan, L. Xin, M. Erik, L. M. Ignacio, and G.-D. Javier, “Deep neural networks for small footprint text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, vol. 28, no. 4, 2014, pp. 357–366.
-  G. Heigold, I. Moreno, S. Bengio, and N. Shazeer, “End-to-end text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 5115–5119.
-  S.-X. Zhang, Z. Chen, Y. Zhao, J. Li, and Y. Gong, “End-to-end attention based text-dependent speaker verification,” arXiv preprint arXiv:1701.00562, 2017.
-  Y. Liu, Y. Qian, N. Chen, T. Fu, Y. Zhang, and K. Yu, “Deep feature for text-dependent speaker verification,” Speech Communication, vol. 73, pp. 1–13, 2015.
-  D. Snyder, P. Ghahremani, D. Povey, D. Garcia-Romero, Y. Carmiel, and S. Khudanpur, “Deep neural network-based speaker embeddings for end-to-end speaker verification,” in SLT’2016, 2016.
-  T. Yao, C. Meng, H. Liang, and L. Jia, “Speaker recognition system based on deep neural networks and bottleneck features,” Journal of Tsinghua University (Science and Technology), vol. 56, no. 11, pp. 1143–1148, 2016.
-  D. Povey, X. Zhang, and S. Khudanpur, “Parallel training of dnns with natural gradient and parameter averaging,” arXiv preprint arXiv:1410.7455, 2014.
-  L. Li, Y. Lin, Z. Zhang, and D. Wang, “Improved deep speaker feature learning for text-dependent speaker recognition,” in Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2015 Asia-Pacific. IEEE, 2015, pp. 426–429.
-  D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., “The kaldi speech recognition toolkit,” in IEEE 2011 workshop on automatic speech recognition and understanding, no. EPFL-CONF-192584. IEEE Signal Processing Society, 2011.
-  L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, vol. 9, pp. 2579–2605, 2008.