Deep Speaker Verification: Do We Need End to End?

06/22/2017, by Dong Wang, et al.

End-to-end learning treats the entire system as a single adaptable black box which, given sufficient data, may learn a system that works very well for the target task. This principle has recently been applied in several prototype studies on speaker verification (SV), where the feature learning and the classifier are trained together with an objective function that is consistent with the evaluation metric. The opposite of end-to-end is the feature learning approach, which first trains a feature learning model, and then constructs a back-end classifier separately to perform SV. Recently, both approaches achieved significant performance gains on SV, mainly attributed to the smart utilization of deep neural networks. However, the two approaches have not been carefully compared, and their respective advantages have not been well discussed. In this paper, we compare the end-to-end and feature learning approaches on a text-independent SV task. Our experiments on a dataset sampled from the Fisher database and involving 5,000 speakers demonstrated that the feature learning approach outperformed the end-to-end approach. This is strong support for the feature learning approach, at least with data and computation resources similar to ours.


1 Introduction

Speaker verification (SV) is an important biometric recognition technology and has gained great popularity in a wide range of applications, such as access control, transaction authentication, forensics and personalization. After decades of research, SV has achieved significant performance improvements and has been deployed in some practical applications [1, 2, 3, 4]. However, SV is still a very challenging task, mainly due to the large uncertainty caused by the complex convolution of various speech factors, especially phone content and channel [5].

Most of the existing successful SV approaches rely on probabilistic models to factorize speech signals into factors related to speakers and other variations, especially the phone content. A classical probabilistic model is the Gaussian mixture model-universal background model (GMM-UBM) [6], where the speaker factor is assumed to be an additive component to the phone variation (represented by Gaussian components). This model was extended to a low-rank formulation, leading to the joint factor analysis (JFA) model [7] and its 'simplified' version, the famous i-vector model [8]. To further improve speaker-related discrimination, various discriminative back-end models have been proposed, e.g., metric learning [9], linear discriminant analysis (LDA) [8] and its probabilistic version, PLDA [10]. A DNN-based i-vector model was also proposed [11, 12], where a phonetic deep neural network (DNN) is used to enhance the factorization of speaker factors by providing phonetic information.

Recently, the deep learning approach has gained much attention in SV research. Different from the probabilistic methods, these deep SV methods utilize various DNN structures to learn speaker features. This can be regarded as a neural-based speech factorization, which is deep, non-linear and non-Gaussian. The initial work towards deep SV was by Ehsan and colleagues [13]. They constructed a DNN model with the speakers in the training set as the targets. The frame-level features were read from the activations of the last hidden layer, and the utterance-level representations (called 'd-vectors') were obtained by averaging over the frame-level features. In evaluation, the decision score was computed as a simple cosine distance between the d-vectors of the enrollment utterance and the test utterance. These preliminary results triggered much interest in deep SV. Many researchers quickly noticed that the inferior performance of this approach compared to the counterpart i-vector model might be caused by the naive back-end model, i.e., the frame averaging and the cosine-based scoring. An 'end-to-end approach' was therefore developed that learns the back-end model together with the feature learning [14, 15, 16, 17].

Another approach focuses on learning speaker features, leaving the back-end model as a separate component. The idea is that if the feature learning is strong enough, the back-end model issue will be naturally solved. Our group followed this direction, and found that a simple CT-DNN structure can learn speaker features very well [18].

These two deep SV approaches, end-to-end and feature learning, however, have not been carefully compared. In this paper, we present a comparative experimental study of the two deep SV approaches. Based on a training database consisting of 5,000 speakers, we found that the feature learning approach performs consistently better than the end-to-end approach. This result is strong support for the feature learning approach, at least in conditions similar to our experiment.

The rest of this paper is organized as follows. Section 2 presents the two deep SV learning approaches in detail. The comparative experiments are presented in Section 3, and Section 4 concludes the paper.

2 Deep speaker verification models

This section presents the model structures of the feature learning approach and the end-to-end approach used in our study. The former was proposed by our group [18], and the latter was proposed by Snyder et al. [16].

2.1 Feature learning model

The DNN structure of the feature learning system is illustrated in Fig. 1. It consists of a convolutional (CN) component and a time-delay (TD) component, connected by a bottleneck layer. The frame-level speech features are read from the last hidden layer (feature layer). More details can be found in [18].

Figure 1: The DNN structure of the deep feature learning system [18].

Figure 2: The DNN structure of the end-to-end system [16].

To perform SV, a simple back-end model is constructed, consisting of an average pooling that averages the frame-level speaker features into utterance-level representations, denoted by 'd-vectors', and a scoring scheme based on the cosine distance between the d-vectors of the enrollment and test utterances.
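This back-end can be sketched in a few lines. The names `d_vector` and `cosine_score` are ours, and plain Python lists stand in for the real feature matrices:

```python
import math

def d_vector(frame_feats):
    """Average frame-level speaker features into an utterance-level d-vector."""
    dim = len(frame_feats[0])
    return [sum(f[i] for f in frame_feats) / len(frame_feats) for i in range(dim)]

def cosine_score(enroll, test):
    """Cosine similarity between enrollment and test d-vectors."""
    dot = sum(a * b for a, b in zip(enroll, test))
    norm = math.sqrt(sum(a * a for a in enroll)) * math.sqrt(sum(b * b for b in test))
    return dot / norm
```

The score is then compared against a threshold to accept or reject the claimed identity.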

2.2 End-to-end model

The end-to-end DNN model proposed by Snyder et al. [16] is used in our study. A particular reason we chose this model is that it has been tested on text-independent tasks. The model structure is illustrated in Fig. 2. The input is a pair of utterances (in the form of feature sequences) sampled from the training data, where the two utterances may be from the same speaker or from different speakers, labelled accordingly.

The DNN structure consists of an embedding component and a scoring component (back-end). The embedding component converts an input utterance to a speaker embedding. The utterance is first propagated through three time-delay network-in-network (NIN) layers [19]. Each NIN component is composed of a stack of three rectified linear units connected by affine transformations, which first map the input to an intermediate space and then project it to the layer's output space. The output of the third NIN layer is aggregated by a temporal pooling layer, by which the statistics of the input utterance are derived. Finally, the statistics are propagated to another NIN layer and a linear affine layer, producing the speaker embedding. Note that in this work, we only use the mean vector as the statistic, as it performed the best in our experiments.

The back-end scoring component estimates the probability that the two input utterances, represented by their embeddings $x$ and $y$, belong to the same speaker. It is essentially a bi-linear projection followed by a logistic sigmoid function, formulated as follows:

p(x, y) = \sigma(L(x, y)), \quad \text{where} \quad L(x, y) = x^T y - x^T S x - y^T S y + b.   (1)

With this DNN structure, the objective function of the training is simply the cross entropy between the prediction of the network and the ground truth of the training samples (pairs of utterances), formulated as:

E = -\sum_{(x,y) \in P_s} \ln p(x, y) - \alpha \sum_{(x,y) \in P_d} \ln(1 - p(x, y)),   (2)

where P_s and P_d represent the sets of same-speaker pairs and different-speaker pairs, respectively. Since there are many more pairs in P_d than in P_s, a constant hyper-parameter \alpha is introduced to balance the contribution of each set.
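A minimal sketch of this scoring and objective, assuming the similarity form L(x, y) = x'y - x'Sx - y'Sy + b and the balancing constant alpha described above. The function names are ours, and in practice S and b would be trained jointly with the embedding network rather than fixed:

```python
import math

def score(x, y, S, b):
    """Similarity L(x, y) = x'y - x'Sx - y'Sy + b between two embeddings."""
    def quad(v):
        return sum(v[i] * sum(S[i][j] * v[j] for j in range(len(v)))
                   for i in range(len(v)))
    dot = sum(a * c for a, c in zip(x, y))
    return dot - quad(x) - quad(y) + b

def same_speaker_prob(x, y, S, b):
    """Logistic sigmoid of the similarity: probability of a same-speaker pair."""
    return 1.0 / (1.0 + math.exp(-score(x, y, S, b)))

def objective(same_pairs, diff_pairs, S, b, alpha):
    """Cross entropy; alpha down-weights the much larger different-speaker set."""
    loss = -sum(math.log(same_speaker_prob(x, y, S, b)) for x, y in same_pairs)
    loss -= alpha * sum(math.log(1.0 - same_speaker_prob(x, y, S, b))
                        for x, y in diff_pairs)
    return loss
```

With S = 0 and b = 0, the similarity reduces to a plain inner product, which makes the bi-linear term's role as a learned normalizer easy to see.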

2.3 Comparison of feature learning and end-to-end

The two deep SV approaches are fundamentally different from multiple aspects. A thorough comparison helps understand their respective advantages.

  • Difference in model structure. The end-to-end model involves both speaker embedding (front-end) and scoring (back-end), and the two components are trained jointly as an integrated network. The feature learning model, in contrast, involves only the front-end.

  • Difference in training objectives. The training objective of the end-to-end model is to directly determine if a pair of utterances are from the same speaker or different speakers. The feature learning model, instead, aims to discriminate the speakers in the training set. Obviously, the end-to-end objective is more consistent with the SV task.

  • Difference in training scheme. The end-to-end model is trained in a pair-wise style, which heavily relies on the quality and quantity of the sampled pairs. The feature learning model is trained in a one-hot style, for which a single training example triggers a much stronger error signal through the softmax function. This suggests that training the feature learning model could be easier than training the end-to-end model, and requires less data and computation.

  • Difference in generalizability. The end-to-end approach is purely task-oriented, and the resultant system can perform SV only; the feature learning approach, instead, learns intermediate representations that can be used in a broad range of applications, such as speech signal factorization [20] and speaker-dependent text-to-speech synthesis.

In summary, the end-to-end model is theoretically optimal for SV, but the training could be difficult. The feature learning approach is the opposite. Which approach is better in practical usage is therefore an open question.
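The training-scheme difference can be made concrete by counting supervision signals: a one-hot model sees one labeled example per utterance, while a pair-wise model must sample from a pair set that grows quadratically and is dominated by different-speaker pairs. A small illustrative sketch (the function name is ours):

```python
def pair_counts(n_speakers, utts_per_speaker):
    """Compare supervision granularity: one-hot labels vs. trainable pairs."""
    n_utts = n_speakers * utts_per_speaker
    one_hot_examples = n_utts  # one softmax target per utterance
    # pairs drawn within each speaker
    same_pairs = n_speakers * utts_per_speaker * (utts_per_speaker - 1) // 2
    # all remaining pairs cross speakers
    diff_pairs = n_utts * (n_utts - 1) // 2 - same_pairs
    return one_hot_examples, same_pairs, diff_pairs
```

The heavy imbalance between the last two counts is exactly why the pair sampling strategy (and the balancing constant in the end-to-end objective) matters so much.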

3 Experiments

In this section, we first present the database and the settings of the different systems, then report the performance results. Experiments were also conducted to analyze the factors that caused the performance difference between the two deep SV systems. All the experiments were conducted with the Kaldi toolkit [21].

3.1 Database

Our experiments were conducted with the Fisher database. The training set and the evaluation set are presented as follows.

  • Training set: It consists of 5,000 male and female speakers, with utterances randomly selected from the Fisher database. This dataset was used for training the i-vector system, the LDA and PLDA models, and the DNNs of the two deep SV systems.

  • Evaluation set: It consists of male and female speakers randomly selected from the Fisher database, with no overlap between the speakers of the training set and the evaluation set.

We set two test conditions: a short-enrollment condition (C(4-4)) and a long-enrollment condition (C(40-4)), for which the average duration of the enrollment utterances is 4 seconds and 40 seconds, respectively. More details of the two test conditions are presented in Table 1. The trials in the test are either female-female or male-male, and the results are reported on the pool of all trials.

Test condition C(4-4) C(40-4)
No. of Enrollment Utt. 82k 10k
No. of Test Utt. 82k 73k
Avg. duration of Enrollment Utt. 4s 40s
Avg. duration of Test Utt. 4s 4s
No. of Target Trials 3.5k 73k
No. of Non-target Trials 82M 36M
Table 1: Data profile of the test conditions.
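The trial construction described above (same-gender trials only, with a target trial whenever the enrollment and test speakers match) can be sketched as follows; the tuple layout is an assumption for illustration:

```python
def make_trials(enroll, test):
    """Build same-gender trials from (speaker, gender, utt_id) tuples.

    A trial is a target if enrollment and test come from the same speaker.
    """
    trials = []
    for e_spk, e_gender, e_utt in enroll:
        for t_spk, t_gender, t_utt in test:
            if e_gender != t_gender:  # only female-female or male-male trials
                continue
            trials.append((e_utt, t_utt, e_spk == t_spk))
    return trials
```

With tens of thousands of utterances per side, this cross product is what produces the millions of non-target trials in Table 1.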

3.2 Model settings

3.2.1 I-vector system

The i-vector system was built as a baseline for comparison. The raw feature involved 19-dimensional MFCCs plus the log energy. This raw feature was augmented by its first- and second-order derivatives, resulting in a 60-dimensional feature vector, which was used by the i-vector model. Prior to the PLDA scoring, the i-vectors were centered and length-normalized. The entire system, including the number of UBM Gaussian components and the dimensionalities of the i-vector and LDA projection spaces, was trained using the Kaldi SRE08 recipe.

3.2.2 Deep feature learning system

The deep feature learning system was constructed based on the DNN structure shown in Fig. 1. The input feature was 40-dimensional Fbanks, spliced with a symmetric window of neighboring frames; with two time-delay hidden layers, the effective context window is enlarged further. The number of output units corresponds to the number of speakers in the training data. The speaker features were extracted from the last hidden layer (the feature layer in Fig. 1), and the utterance-level d-vectors were derived by averaging the frame-level features. The three scoring metrics used by the i-vector system were also used by the d-vector system: cosine distance, LDA and PLDA.

3.2.3 End-to-end system

The end-to-end system was constructed based on the DNN architecture shown in Fig. 2. The input feature was 40-dimensional Fbanks, spliced with a symmetric window of neighboring frames; with three time-delay hidden layers, the effective context window is enlarged further. The training samples were organized as pairs of feature chunks, which may be either same-speaker or different-speaker. Each mini-batch involves a fixed number of same-speaker pairs and different-speaker pairs; this number was limited by the GPU memory in our experiment. The number of frames in a feature chunk was a random variable sampled from a log-uniform distribution. The feature dimension is unchanged by the temporal pooling (note that the statistic produced by the temporal pooling is just the mean vector). In evaluation, the enrollment and test utterances were fed to the neural model simultaneously, and the decision scores were obtained from the output of the network.

We used the recipe published by the authors of [16], and tried our best to optimize the system by tuning the configurations, including the chunk size and the batch size. The settings mentioned above are the optimal values we found during system tuning.

3.3 Experimental results

Systems Scoring C(4-4) C(40-4)
i-vector Cosine 16.96 4.81
LDA 10.95 3.30
PLDA 8.84 3.39
Deep feature Cosine 10.31 4.01
LDA 7.86 2.39
PLDA 13.01 5.24
End-to-end - 9.85 4.59
Table 2: EER(%) results of the three SV systems.

The results of the three SV systems in terms of equal error rate (EER%) are reported in Table 2. First, we compare the three SV systems with their own best configurations, i.e., i-vector + PLDA and deep feature + LDA. It can be seen that the deep feature system performs the best, while the end-to-end system is inferior to the other two systems. The better performance of the feature learning system relative to the i-vector system has been reported in our previous paper [18]. The inferior performance of the end-to-end system compared to the i-vector system with limited training data is also consistent with the results reported by Snyder et al. [16], who found that the end-to-end model did not beat the best i-vector model (i-vector + PLDA) when the training set contained a limited number of speakers.
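EER, the metric in Table 2, is the operating point where the miss rate equals the false-alarm rate. A minimal threshold-sweep sketch; this is a simplified stand-in for the ROC-interpolation methods real toolkits use:

```python
def eer(target_scores, nontarget_scores):
    """Equal error rate: sweep thresholds and return (miss + fa) / 2 at the
    point where the miss rate and false-alarm rate are closest."""
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(target_scores + nontarget_scores):
        miss = sum(s < t for s in target_scores) / len(target_scores)
        fa = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        if abs(miss - fa) < best_gap:
            best_gap, best_eer = abs(miss - fa), (miss + fa) / 2
    return best_eer
```

Feeding it the pooled target and non-target trial scores of any of the three systems yields the EER values of the kind reported in Table 2.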

Another observation is that the LDA model plays an important role for the feature learning system. At first glance, this seems unreasonable, as the features are already speaker-discriminative and the discriminative DNN training should have superseded LDA. However, more careful analysis shows that LDA normalizes the within-class variation, which is important for SV but is not the goal of the DNN training. From this perspective, LDA can be regarded as a better back-end model that learns an SV-oriented decision strategy. The importance of an SV back-end model for d-vector systems was also noticed by Heigold et al. [14], who reported that score normalization (t-norm) is important for improving the performance of a d-vector system, although their study was based on a text-dependent task. T-norm plays a role similar to LDA in normalizing within-class variation.
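The within-class normalization attributed to LDA here can be illustrated in a simplified form: pool the within-speaker variance per dimension and rescale each dimension by its inverse standard deviation. This captures only the variance-normalization aspect, not the full LDA projection used in the experiments; the names are ours:

```python
def within_class_normalize(vectors_by_speaker):
    """Rescale each dimension by its pooled within-speaker standard deviation,
    a simplified stand-in for LDA's within-class variation normalization."""
    dim = len(next(iter(vectors_by_speaker.values()))[0])
    var = [0.0] * dim
    count = 0
    for vecs in vectors_by_speaker.values():
        # deviation of each d-vector from its own speaker mean
        mean = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
        for v in vecs:
            for i in range(dim):
                var[i] += (v[i] - mean[i]) ** 2
            count += 1
    var = [x / count for x in var]
    scale = [1.0 / (x ** 0.5 + 1e-8) for x in var]
    return {spk: [[v[i] * scale[i] for i in range(dim)] for v in vecs]
            for spk, vecs in vectors_by_speaker.items()}
```

Dimensions along which the same speaker's d-vectors scatter widely are shrunk, so the cosine score is dominated by the speaker-discriminative directions.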

The probabilistic version of LDA (PLDA), however, does not provide any contribution (in fact, it hurts the performance). The failure of PLDA has been reported in our previous work [18]. A possible reason is that the mean d-vector of each speaker does not follow a Gaussian prior, and so cannot be well modeled by the PLDA model.

These results demonstrate that although the end-to-end model is highly SV-oriented, it is not easy to take full advantage of it, due to the difficulties in model training. In our experiments, we indeed found that the end-to-end training is rather difficult: it requires careful tuning, otherwise the training may diverge, and much attention has to be paid to the preparation of the training pairs, e.g., the number of training pairs in each iteration and the number of frames in each training pair. We also observed a non-linear dynamic during end-to-end training: the objective function gets stuck at a value for quite a long time, then suddenly drops dramatically. This, from another perspective, demonstrates the difficulty of end-to-end training.

4 Conclusions

This paper studied two deep speaker verification models: the end-to-end neural model and the deep feature learning model. Our experimental results showed that the two deep speaker models achieved comparable or even better performance than the i-vector/PLDA model. When compared with each other, the feature learning model performs better than the end-to-end model, although the latter is assumed to be more consistent with the SV task. From these experiments, it seems that end-to-end learning is not very suitable for SV, at least with data and computation resources similar to our experiment. Many questions remain open, e.g., how will the two approaches perform as the training set grows, and how can the respective advantages of the two approaches be combined to construct a more powerful deep speaker model? All of these need careful investigation.

Acknowledgment

This work was supported by the National Natural Science Foundation of China under Grant No. 61371136 / 61633013 and the National Basic Research Program (973 Program) of China under Grant No. 2013CB329302.

References

  • [1] J. P. Campbell, “Speaker recognition: A tutorial,” Proceedings of the IEEE, vol. 85, no. 9, pp. 1437–1462, 1997.
  • [2] D. A. Reynolds, “An overview of automatic speaker recognition technology,” in Acoustics, speech, and signal processing (ICASSP), 2002 IEEE international conference on, vol. 4.   IEEE, 2002, pp. IV–4072.
  • [3] T. Kinnunen and H. Li, “An overview of text-independent speaker recognition: From features to supervectors,” Speech communication, vol. 52, no. 1, pp. 12–40, 2010.
  • [4] J. H. Hansen and T. Hasan, “Speaker recognition by machines and humans: A tutorial review,” IEEE Signal processing magazine, vol. 32, no. 6, pp. 74–99, 2015.
  • [5] T. F. Zheng, Q. Jin, L. Li, J. Wang, and F. Bie, “An overview of robustness related issues in speaker recognition,” in Asia-Pacific Signal and Information Processing Association, 2014 Annual Summit and Conference (APSIPA).   IEEE, 2014, pp. 1–10.
  • [6] D. Reynolds, T. Quatieri, and R. Dunn, “Speaker verification using adapted Gaussian mixture models,” Digital Signal Processing, vol. 10, no. 1, pp. 19–41, 2000.
  • [7] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel, “Joint factor analysis versus eigenchannels in speaker recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, pp. 1435–1447, 2007.
  • [8] N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2011.
  • [9] M. Schultz and T. Joachims, “Learning a distance metric from relative comparisons,” in Advances in neural information processing systems, 2004, pp. 41–48.
  • [10] S. Ioffe, “Probabilistic linear discriminant analysis,” Computer Vision ECCV 2006, Springer Berlin Heidelberg, pp. 531–542, 2006.
  • [11] P. Kenny, V. Gupta, T. Stafylakis, P. Ouellet, and J. Alam, “Deep neural networks for extracting Baum-Welch statistics for speaker recognition,” Odyssey, 2014.
  • [12] Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren, “A novel scheme for speaker recognition using a phonetically-aware deep neural network,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on.   IEEE, 2014, pp. 1695–1699.
  • [13] V. Ehsan, L. Xin, M. Erik, L. M. Ignacio, and G.-D. Javier, “Deep neural networks for small footprint text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, vol. 28, no. 4, 2014, pp. 357–366.
  • [14] G. Heigold, I. Moreno, S. Bengio, and N. Shazeer, “End-to-end text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on.   IEEE, 2016, pp. 5115–5119.
  • [15] S.-X. Zhang, Z. Chen, Y. Zhao, J. Li, and Y. Gong, “End-to-end attention based text-dependent speaker verification,” arXiv preprint arXiv:1701.00562, 2017.
  • [16] D. Snyder, P. Ghahremani, D. Povey, D. Garcia-Romero, Y. Carmiel, and S. Khudanpur, “Deep neural network-based speaker embeddings for end-to-end speaker verification,” in SLT’2016, 2016.
  • [17] C. Li, X. Ma, B. Jiang, X. Li, X. Zhang, X. Liu, Y. Cao, A. Kannan, and Z. Zhu, “Deep speaker: an end-to-end neural speaker embedding system,” arXiv preprint arXiv:1705.02304, 2017.
  • [18] L. Li, Y. Chen, Y. Shi, Z. Tang, and D. Wang, “Deep speaker feature learning for text-independent speaker verification,” arXiv preprint arXiv:1705.03670, 2017.
  • [19] P. Ghahremani, V. Manohar, D. Povey, and S. Khudanpur, “Acoustic modelling from the signal domain using CNNs,” Interspeech 2016, pp. 3434–3438, 2016.
  • [20] D. Wang, L. Li, Y. Shi, Y. Chen, and Z. Tang, “Deep factorization for speech signal,” arXiv preprint arXiv:1706.01777, 2017.
  • [21] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz et al., “The Kaldi speech recognition toolkit,” in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding, no. EPFL-CONF-192584.   IEEE Signal Processing Society, 2011.