VAE-based Domain Adaptation for Speaker Verification

08/27/2019 · Xueyi Wang, et al. · Tsinghua University

Deep speaker embedding has achieved satisfactory performance in speaker verification. By training the neural model to discriminate among the speakers in the training set, deep speaker embeddings (called `x-vectors`) can be derived from its hidden layers. Despite this good performance, the present embedding model is highly domain-sensitive: it works well in domains whose acoustic conditions match those of the training data (in-domain), but degrades in mismatched domains (out-of-domain). In this paper, we present a domain adaptation approach based on the Variational Auto-Encoder (VAE). This model transforms x-vectors to a regularized latent space; within this latent space, a small amount of data from the target domain is sufficient to accomplish the adaptation. Our experiments demonstrate that with this VAE-adaptation approach, speaker embeddings can be easily transformed to the target domain, leading to noticeable performance improvement.


1 Introduction

Automatic speaker verification (ASV) is an important biometric authentication technology and has found a broad range of applications. Conventional ASV methods are based on statistical models [18, 8, 3]. Perhaps the most famous statistical model is the Gaussian mixture model-universal background model (GMM-UBM) [18]. It factorizes the speech signal into the phonetic factor and the speaker factor, and this factorization is based on the maximum likelihood (ML) criterion. This basic factorization model was later extended to various low-rank variants, including the joint factor analysis model [8] and the i-vector model [3]. Further improvements were obtained by either discriminative back-end models (e.g., PLDA [7]) or phonetic knowledge transfer (e.g., the DNN-based i-vector model [9, 13]).

Recently, inspired by the success of deep learning in automatic speech recognition (ASR), neural-based ASV models have been studied and have shown great potential [23, 6, 14]. These models leverage the power of deep neural networks (DNNs) to learn strong speaker-discriminative features, ideally from a large amount of speaker-labelled data. A state-of-the-art neural-based architecture is the x-vector model proposed by Snyder et al. [21]. In this architecture, frame-level deep features are derived by several fully-connected layers (or more structured layers); the first- and second-order statistics of the frame-level features are then collected and projected to a low-dimensional representation, called the 'x-vector'. During training, the objective of discriminating the speakers in the training dataset encourages the DNN to learn discriminative representations at both the frame level (deep features) and the utterance level (x-vectors). The x-vector model has achieved state-of-the-art performance in various speaker recognition tasks, as well as in related tasks such as language identification [20].
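To make the pooling step concrete, the following is a minimal sketch of statistics pooling, written in PyTorch (our choice; the systems in this paper are Kaldi-based). Plain feed-forward layers stand in for the TDNN, and all layer sizes are hypothetical placeholders.

```python
# Minimal sketch of x-vector-style statistics pooling (assumes PyTorch;
# plain feed-forward layers stand in for the TDNN, and layer sizes are
# hypothetical placeholders, not the values used in this paper).
import torch
import torch.nn as nn

class StatsPooling(nn.Module):
    """Collect first- and second-order statistics over the time axis."""
    def forward(self, h):                      # h: (batch, time, feat_dim)
        return torch.cat([h.mean(dim=1), h.std(dim=1)], dim=1)

frame_net = nn.Sequential(                     # frame-level deep features
    nn.Linear(30, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
)
embed = nn.Linear(2 * 512, 512)                # utterance-level projection

feats = torch.randn(4, 200, 30)                # 4 segments, 200 frames, 30-dim
xvectors = embed(StatsPooling()(frame_net(feats)))   # shape: (4, 512)
```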

In spite of its powerful discriminability, the x-vector model still heavily relies on a strong back-end scoring component, such as LDA and PLDA [2, 24]. This is puzzling at first glance: in the i-vector regime the back-end models play the role of enhancing the discrimination among speakers, but x-vectors are discriminative already. Our previous study shows that the back-end models play a different role when accompanying x-vectors: instead of promoting discrimination, they essentially normalize the prior distribution of speaker x-vectors and the conditional distribution of utterance x-vectors of a particular speaker [24].

A critical problem that usually arises in real-life applications is that the back-end models are highly domain-sensitive: an LDA-PLDA model that is well trained in one domain may degrade significantly in other domains whose acoustic conditions differ substantially from those of the training data. To tackle this problem, this paper presents a domain adaptation approach based on the Variational Auto-Encoder (VAE). VAE is a powerful architecture that can project an unconstrained distribution onto a simple Gaussian distribution, and the projection can be learned in a purely unsupervised way. In our previous study [24], VAE was used as a normalization model that maps the distribution of x-vectors to a more regularized Gaussian. This normalization, when combined with PLDA, clearly improves ASV performance. In this study, we investigate a domain adaptation approach based on this VAE-based normalization architecture. Our experiments show that the VAE-based adaptation outperforms both LDA- and PCA-based adaptation and the well-known unsupervised PLDA adaptation [5, 4].

The organization of this paper is as follows. Section 2 describes the related work, and Section 3 presents the proposed VAE-based adaptation approach. Experiments are reported in Section 4, and the paper is concluded in Section 5.

Figure 1: The three-component architecture of an x-vector system, where the normalization model is a VAE. X-vectors are extracted from the speaker-discriminative network and then pass through the VAE network for normalization. The normalized x-vectors are retrieved from the bottleneck layer of the VAE and scored by PLDA. The adaptation can be conducted on either VAE or PLDA, or both.

2 Related work

This work is a direct extension of our previous work [24]. The main contribution of this extension is a thorough investigation of VAE-based domain adaptation for ASV. Some recent studies on domain adaptation in the x-vector regime are related to this work. For example, Alam et al. [1] presented an unsupervised adaptation approach based on Correlation Alignment (CORAL) [22], which aligns the distributions of in-domain and out-of-domain features; they found that this technique can compensate for the domain mismatch of x-vectors. Lee et al. [12] proposed a similar approach that employs CORAL to align the statistics of in-domain and out-of-domain vectors; the aligned OOD statistics were then used to update the PLDA model. In contrast, our VAE-based approach works on the normalization model rather than the scoring model.
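For reference, the core CORAL transform can be written in a few lines. The sketch below is our illustration in numpy/scipy, not the authors' code; it re-colors vectors from one domain with the covariance of the other, and the regularizer `eps` is a hypothetical choice.

```python
# Hedged sketch of CORAL alignment (cf. [22]): whiten vectors with their own
# covariance, then re-color them with the covariance of the other domain.
# This is an illustration, not the authors' implementation; `eps` is a
# hypothetical regularizer to keep the matrices well-conditioned.
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral(X_src, X_tgt, eps=1e-3):
    d = X_src.shape[1]
    Cs = np.cov(X_src.T) + eps * np.eye(d)   # source-domain covariance
    Ct = np.cov(X_tgt.T) + eps * np.eye(d)   # target-domain covariance
    A = fractional_matrix_power(Cs, -0.5) @ fractional_matrix_power(Ct, 0.5)
    return X_src @ A                          # source aligned to target domain
```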

3 VAE-based domain adaptation

3.1 Revisiting VAE

VAE is a generative model that can represent a complex data distribution [10]. The key idea of VAE is to learn a DNN-based mapping function that maps a simple distribution $p(z)$ to a complex distribution $p(x)$. In other words, it represents complex observations by simply distributed latent codes via a complex mapping function.

In brief, VAE consists of two parts: a decoder that maps $z$ to $x$, i.e.,

$$p_\theta(x|z) = \mathcal{N}\big(x;\, f_\theta(z),\, \sigma^2 I\big),$$

where the prior $p(z)$ has been assumed to be Gaussian, $p(z) = \mathcal{N}(0, I)$; and an encoder that produces a distribution $q_\phi(z|x)$ that approximates the posterior distribution $p_\theta(z|x)$ as follows:

$$q_\phi(z|x) = \mathcal{N}\big(z;\, \mu_\phi(x),\, \mathrm{diag}(\sigma^2_\phi(x))\big),$$

where $\mu_\phi(x)$ and $\sigma^2_\phi(x)$ are produced by the encoder network.

The training objective is the log probability of the training data, $\log p_\theta(x)$. It is intractable, so a variational lower bound is optimized instead, which depends on both the encoder $q_\phi(z|x)$ and the decoder $p_\theta(x|z)$. This is formally written as:

$$\mathcal{L}(\theta, \phi; x) = -D_{KL}\big(q_\phi(z|x)\,\|\,p(z)\big) + \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big],$$

where $D_{KL}$ is the KL divergence, and $\mathbb{E}_{q}$ denotes expectation w.r.t. the distribution $q$. As the expectation is intractable, a sampling scheme is often used, as shown in the blue box in Fig. 1. More details of the training process can be found in [10].
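As an illustration of these definitions, below is a minimal VAE sketch (assuming PyTorch; all dimensions are hypothetical). The encoder outputs the mean and log-variance of $q(z|x)$, a reparameterized sample of $z$ is decoded back to $x$, and the loss is the negative lower bound.

```python
# Minimal VAE sketch (assumes PyTorch; dimensions are hypothetical, not the
# paper's configuration). The loss is the negative variational lower bound:
# a Gaussian reconstruction term plus KL(q(z|x) || N(0, I)).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=512, h_dim=256, z_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.Tanh())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.Tanh(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sampling step
        return self.dec(z), mu, logvar

def neg_elbo(x, x_rec, mu, logvar):
    rec = F.mse_loss(x_rec, x, reduction='sum')   # -log p(x|z) up to constants
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```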

3.2 VAE for normalization and adaptation

A conventional x-vector system consists of two components: a front-end model that extracts speaker embeddings (x-vectors) and a back-end model that performs scoring. The front-end model is trained by discriminating the speakers in the training set, as shown in the dotted gray box in Fig. 1. Learning sufficiently discriminative and generalizable speaker embeddings requires a large amount of speaker-labelled data. In spite of its powerful discriminability, the x-vector model still heavily relies on a PLDA back-end.

A potential problem, however, is that these back-end models may be domain-sensitive. For instance, a well-trained PLDA tends to be ineffective on out-of-domain (OOD) data. An intuitive remedy is to retrain the PLDA model on OOD data; however, training a PLDA model from scratch requires a large amount of labelled data, usually thousands of speakers, each with multiple sessions. In many practical situations, collecting such a large amount of labelled data is difficult and time-consuming. Making full use of limited speaker-labelled data is therefore the key to dealing with the OOD issue. To this end, a multitude of PLDA adaptation approaches have been proposed [5, 4]. In all these methods, within-class and between-class statistics are collected from the adaptation data and then used to update the PLDA model, as sketched below.
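As a rough illustration of this family of methods, the numpy sketch below collects within-class and across-class covariances from a labelled adaptation set and interpolates them with the out-of-domain statistics. It is a simplification of [5, 4]: the pseudo-labelling (clustering) step of the unsupervised variant is omitted, and the weight `alpha` is a hypothetical hyperparameter.

```python
# Hedged sketch of covariance interpolation for PLDA adaptation (cf. [5, 4]).
# The clustering used to pseudo-label unlabelled data in [4] is omitted;
# `alpha` is a hypothetical interpolation weight.
import numpy as np

def class_covariances(X, labels):
    """Within-class and across-class covariances from labelled vectors."""
    classes = np.unique(labels)
    means = np.stack([X[labels == c].mean(axis=0) for c in classes])
    Sigma_ac = np.cov((means - X.mean(axis=0)).T, bias=True)
    Sigma_wc = np.cov((X - means[np.searchsorted(classes, labels)]).T, bias=True)
    return Sigma_wc, Sigma_ac

def adapt_plda(Sigma_wc_out, Sigma_ac_out, X_adapt, labels, alpha=0.5):
    """Blend out-of-domain PLDA covariances with in-domain statistics."""
    Sigma_wc_in, Sigma_ac_in = class_covariances(X_adapt, labels)
    Sigma_wc = alpha * Sigma_wc_in + (1 - alpha) * Sigma_wc_out
    Sigma_ac = alpha * Sigma_ac_in + (1 - alpha) * Sigma_ac_out
    return Sigma_wc, Sigma_ac
```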

In a previous study [24], we presented a three-component architecture in which a normalization model is introduced between the front-end (x-vector DNN) and the back-end (PLDA). The role of the normalization model is to project x-vectors to a latent space in which the projected codes are more regularized, e.g., more Gaussian. This model could be PCA or LDA, but we found VAE to be more powerful, due to its capability of representing complex distributions with a simple distribution. This three-component architecture is shown in Fig. 1.

This architecture motivates a new domain-adaptation approach, i.e., adapting the normalization model rather than the PLDA back-end. In particular, there are several advantages if we adapt the VAE-based normalization model: (1) VAE is essentially a distribution mapping function that involves strong structural constraints (i.e., conditional Gaussian) in both the data space and latent space. This highly structured architecture allows effective adaptation even with a very limited amount of data; (2) VAE training is purely unsupervised and the adaptation data are easy to obtain; (3) After VAE adaptation, the normalized x-vectors (latent codes) remain regularized although the distribution of raw x-vectors may have greatly changed. This largely alleviates the necessity of PLDA adaptation. Fig. 1 illustrates the VAE-based adaptation, where the parameters of both VAE and PLDA could be adapted using the OOD data.
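A hedged sketch of the resulting adaptation loop is shown below, reusing the VAE and neg_elbo definitions from the sketch in Section 3.1; the epoch count and learning rate are hypothetical.

```python
# Hedged sketch of VAE-based domain adaptation: fine-tune a pretrained VAE on
# a small, unlabelled set of OOD x-vectors, then read the normalized vector
# from the encoder mean. Reuses VAE and neg_elbo from the sketch in Sec. 3.1;
# the optimizer settings are hypothetical.
import torch

def adapt_vae(vae, ood_xvectors, epochs=20, lr=1e-4):
    opt = torch.optim.Adam(vae.parameters(), lr=lr)
    for _ in range(epochs):
        x_rec, mu, logvar = vae(ood_xvectors)
        loss = neg_elbo(ood_xvectors, x_rec, mu, logvar)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vae

def normalize(vae, xvectors):
    """Normalized x-vector = latent code (encoder mean), then fed to PLDA."""
    with torch.no_grad():
        return vae.mu(vae.enc(xvectors))
```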

4 Experiments

4.1 Data

Three datasets were used in our experiments: VoxCeleb, SITW and CSLT-SITW. VoxCeleb was used for model training, while the other two were used for evaluation. More information about these three datasets is presented below.

VoxCeleb: A large-scale, freely available speaker database collected by the University of Oxford [16]. Data augmentation was applied: the MUSAN corpus [19] was used to generate noisy utterances, and the room impulse response (RIRS) corpus [11] was used to generate reverberant utterances. After removing the utterances shared with SITW, this dataset was used to train the DNN x-vector model, as well as the PLDA and VAE models.

SITW-Eval.Core: A standard, freely available database for ASV evaluation [15]. It was collected from open-source media channels and consists of speech from well-known persons. This dataset was used as the IND test set.

CSLT-SITW: A small database collected by CSLT for commercial usage. Each speaker records short Chinese command words, a few seconds in duration. The recording scenarios involve laboratory, corridor, street, restaurant, bus, subway, mall, home, etc. Speakers varied their recording devices and poses during the recording. In our experiments, part of the speakers were used for OOD adaptation (the OOD adaptation set), and the remaining speakers were used for OOD evaluation (the OOD test set).

4.2 Settings

We built several systems to validate the VAE-based domain adaptation. All these systems use the same x-vector front-end and PLDA back-end, but differ in the normalization model. We denote these systems as follows.

Baseline: The baseline x-vector system, built following the Kaldi SITW recipe [17]. The feature-learning component is a multi-layer time-delay neural network (TDNN). The statistics pooling layer computes the mean and standard deviation of the frame-level features over a speech segment. The size of the output layer corresponds to the number of speakers in the training set. Once trained, the activations of the penultimate hidden layer are read out as the x-vector. There is no normalization model.

PCA: As the baseline, but with PCA as the normalization model, projecting x-vectors to a low-dimensional code space. Similar to VAE, PCA is an unsupervised model, though linear and shallow.

LDA: As the baseline, but with LDA as the normalization model, again projecting x-vectors to a low-dimensional code space.

VAE: As the baseline, but with VAE as the normalization model. The VAE is a multi-layer DNN whose bottleneck (code) layer provides the low-dimensional normalized representation.

C-VAE: As the baseline, but with C-VAE as the normalization model. C-VAE is a variant of VAE, with a cohesive loss involved to encourage within-class coherence [24].
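As an illustration of the two shallow normalizers, the sketch below fits PCA and LDA with scikit-learn (our choice of tooling, not the paper's Kaldi-based pipeline); the code dimension and the stand-in data are hypothetical placeholders.

```python
# Hedged sketch of the PCA and LDA normalization models using scikit-learn
# (our choice; the paper's pipeline is Kaldi-based). The code dimension and
# the random stand-in data are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

k = 128                                        # hypothetical code dimension
X_train = np.random.randn(1000, 512)           # stand-in training x-vectors
y_train = np.arange(1000) % 200                # 200 stand-in speaker labels

pca = PCA(n_components=k, whiten=True).fit(X_train)            # unsupervised
lda = LinearDiscriminantAnalysis(n_components=k).fit(X_train, y_train)

X_test = np.random.randn(10, 512)
codes_pca = pca.transform(X_test)              # PCA-normalized x-vectors
codes_lda = lda.transform(X_test)              # LDA-normalized x-vectors
```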

4.3 Basic results

We first present the basic results evaluated on the IND test set and the OOD test set. All three components of the system (front-end, normalization, back-end) are trained on VoxCeleb. The results in terms of equal error rate (EER) are reported in Table 1. As expected, for all five systems the performance on the IND data is better than on the OOD data. For the baseline system (without any normalization), the degradation on the OOD data is moderate, suggesting that the DNN x-vector model has been well trained and is fairly generalizable. For the systems with normalization models (PCA, LDA, VAE and C-VAE), the performance on the IND data is significantly improved, which confirms the contribution of normalization. However, the performance on the OOD data remains nearly unchanged, indicating that all these normalization models suffer from domain mismatch. In particular, the two VAE systems drop the most on the OOD data, even though their performance on the IND data is the best. This is not surprising, as VAE/C-VAE are the most complex models and so tend to overfit the training domain.

Baseline PCA LDA VAE C-VAE
IND 16.79 4.84 3.80 3.64 3.77
OOD 18.51 14.58 14.82 16.72 15.58
Table 1: EER(%) results of various systems on the IND data and the OOD data.

4.4 PLDA adaptation

In this experiment, we keep all settings as in Section 4.3, but adapt the PLDA model using the OOD adaptation data. This back-end adaptation partly mitigates the domain-mismatch problem and so should improve the performance of all systems. We investigated two adaptation schemes: PLDA-RET, which retrains the PLDA model from scratch on the adaptation data, and PLDA-UAT, which adapts the PLDA model using the unsupervised adaptation approach proposed in [4]. The results are presented in Table 2.

Baseline PCA LDA VAE C-VAE
PLDA 18.51 14.58 14.82 16.72 15.58
PLDA-RET 15.25 13.83 14.18 13.85 13.47
PLDA-UAT 14.49 12.82 13.40 15.02 13.88
Table 2: EER(%) results on the OOD set with PLDA adaptation.

Firstly, we observe that both PLDA adaptation approaches improve the performance of all five systems, as expected. Secondly, the best performance is obtained by the PCA system with PLDA-UAT; the VAE and C-VAE systems do not work as well as the PCA and LDA systems. This indicates that PLDA adaptation cannot fully compensate for the domain mismatch inherent in the VAE/C-VAE models.

4.5 Adaptation for normalization

We have found that the normalization models, in particular VAE and C-VAE, suffer from a domain mismatch on the OOD data that PLDA adaptation cannot fully address. In this experiment, we therefore adapt both the normalization model and the PLDA back-end. For simplicity, both adaptations are implemented as re-training. The results are shown in Table 3, where 'Norm-Adapt' denotes the normalization-model adaptation.

PCA LDA VAE C-VAE
PLDA-RET 13.83 14.18 13.85 13.47
Norm-Adapt + PLDA-RET 13.31 14.84 12.79 12.73
Table 3: EER(%) results on the OOD set with adaptation of both the normalization model and the PLDA back-end.

Firstly, it can be observed that adapting the normalization model delivers additional performance gains for all systems except LDA, compared with PLDA adaptation alone (PLDA-RET). As expected, the improvement for the VAE and C-VAE systems is much more significant than for the PCA system, indicating that adaptation matters more for complex normalization models. Overall, the C-VAE system obtains the best performance when both the normalization model and PLDA are adapted; this performance is better than that of the best unsupervised PLDA adaptation shown in Table 2.

4.6 Analysis

To better understand these adaptation methods, we compute the skewness and kurtosis of the distributions of normalized x-vectors of utterances in the OOD test dataset. The skewness and (excess) kurtosis are defined as follows:

$$\mathrm{skew}(x) = \mathbb{E}\left[\left(\frac{x-\mu}{\sigma}\right)^{3}\right], \qquad \mathrm{kurt}(x) = \mathbb{E}\left[\left(\frac{x-\mu}{\sigma}\right)^{4}\right] - 3,$$

where $\mu$ and $\sigma$ denote the mean and standard deviation of $x$, respectively. The more Gaussian a distribution is, the closer both values are to zero.
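For reference, a small sketch of how such statistics can be computed with scipy (an assumption; whether the paper pools all dimensions or averages per-utterance values is our guess):

```python
# Sketch of the analysis metrics: skewness and excess kurtosis of normalized
# x-vectors, computed with scipy (an assumption; how the paper aggregates
# across dimensions and utterances is not specified here).
import numpy as np
from scipy.stats import skew, kurtosis

codes = np.random.randn(5000, 128)    # stand-in for normalized x-vectors
pooled = codes.ravel()
print(skew(pooled))                   # ~0 for a Gaussian distribution
print(kurtosis(pooled))               # Fisher (excess) kurtosis: ~0 for Gaussian
```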

The utterance-level skewness and kurtosis of x-vectors normalized by different normalization models are reported in Table 4. The Original group denotes the normalized vectors produced by the original normalization models trained with VoxCeleb, and the Adaptation group denotes the normalized vectors produced by the adapted normalization models.

Skew Kurt
Original Baseline -0.0890 -0.1154
PCA 0.0004 0.0713
LDA 0.0050 0.1257
VAE 0.0096 0.0560
C-VAE -0.0132 -0.0027
Adaptation PCA -0.0076 0.1447
LDA 0.0054 0.3465
VAE -0.0010 -0.0115
C-VAE -0.0023 0.0011
Table 4: Utterance-level Skewness and Kurtosis of x-vectors normalized by different normalization models.

In the Original group, the skewness and kurtosis of the utterance-level x-vectors are clearly reduced by each of the normalization models, confirming that PCA, LDA, VAE and C-VAE are all capable of normalizing x-vectors. Moreover, the skewness and kurtosis of the PCA- and LDA-normalized vectors are smaller than those of the VAE- and C-VAE-normalized vectors, indicating that in the OOD scenario PCA and LDA normalize vectors better than VAE and C-VAE. This is consistent with the observations in Table 2, where the PCA and LDA systems perform better than the VAE and C-VAE systems on the OOD data.

After adaptation, the skewness and kurtosis of the VAE- and C-VAE-normalized vectors are clearly reduced. This is understandable, as these two models are the most powerful in distribution normalization: this power does not transfer well to the OOD data, but a simple adaptation recovers it quickly. Again, these results are consistent with the observations in Table 3, where VAE and C-VAE show the best performance after adaptation.

5 Conclusions

This paper proposed a VAE-based domain adaptation approach for deep speaker embedding. VAE (and its variant C-VAE) is a powerful model for normalizing the distribution of x-vectors, and can be easily adapted to a new domain with a small amount of data. Experiments demonstrated that this VAE-based adaptation outperforms the LDA- and PCA-based adaptation, and when combined with PLDA re-training, it outperforms the unsupervised PLDA adaptation.

Acknowledgement

This work was supported by the National Natural Science Foundation of China No. 61633013, and the Postdoctoral Science Foundation of China No. 2018M640133.

References

  • [1] J. Alam, G. Bhattacharya, and P. Kenny (2018) Speaker verification in mismatched conditions with frustratingly easy domain adaptation. In Proc. Odyssey 2018 The Speaker and Language Recognition Workshop, pp. 176–180. Cited by: §2.
  • [2] W. Cai, J. Chen, and M. Li (2018) Exploring the encoding layer and loss function in end-to-end speaker and language recognition system. In Proc. Odyssey 2018 The Speaker and Language Recognition Workshop, pp. 74–81. Cited by: §1.
  • [3] N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet (2011) Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing 19 (4), pp. 788–798. Cited by: §1.
  • [4] D. Garcia-Romero, A. McCree, S. Shum, N. Brummer, and C. Vaquero (2014) Unsupervised domain adaptation for i-vector speaker recognition. In Proceedings of Odyssey: The Speaker and Language Recognition Workshop. Cited by: §1, §3.2, §4.4.
  • [5] D. Garcia-Romero, X. Zhang, A. McCree, and D. Povey (2014) Improving speaker recognition performance in the domain adaptation challenge using deep neural networks. In 2014 IEEE Spoken Language Technology Workshop (SLT), pp. 378–383. Cited by: §1, §3.2.
  • [6] G. Heigold, I. Moreno, S. Bengio, and N. Shazeer (2016) End-to-end text-dependent speaker verification. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5115–5119. Cited by: §1.
  • [7] S. Ioffe (2006) Probabilistic linear discriminant analysis. In European Conference on Computer Vision (ECCV), pp. 531–542. Cited by: §1.
  • [8] P. Kenny, G. Boulianne, P. Ouellet, and P. Dumouchel (2007) Joint factor analysis versus eigenchannels in speaker recognition. IEEE Transactions on Audio, Speech, and Language Processing 15 (4), pp. 1435–1447. Cited by: §1.
  • [9] P. Kenny, V. Gupta, T. Stafylakis, P. Ouellet, and J. Alam (2014) Deep neural networks for extracting baum-welch statistics for speaker recognition. In Proc. Odyssey, pp. 293–298. Cited by: §1.
  • [10] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §3.1, §3.1.
  • [11] T. Ko, V. Peddinti, D. Povey, M. L. Seltzer, and S. Khudanpur (2017) A study on data augmentation of reverberant speech for robust speech recognition. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5220–5224. Cited by: §4.1.
  • [12] K. A. Lee, Q. Wang, and T. Koshinaka (2019) The coral+ algorithm for unsupervised domain adaptation of plda. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5821–5825. Cited by: §2.
  • [13] Y. Lei, N. Scheffer, L. Ferrer, and M. McLaren (2014) A novel scheme for speaker recognition using a phonetically-aware deep neural network. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 1695–1699. Cited by: §1.
  • [14] L. Li, Y. Chen, Y. Shi, Z. Tang, and D. Wang (2017) Deep speaker feature learning for text-independent speaker verification. In Interspeech, pp. 1542–1546. Cited by: §1.
  • [15] M. McLaren, L. Ferrer, D. Castan, and A. Lawson (2016) The speakers in the wild (SITW) speaker recognition database.. In Interspeech, pp. 818–822. Cited by: §4.1.
  • [16] A. Nagrani, J. S. Chung, and A. Zisserman (2017) Voxceleb: a large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612. Cited by: §4.1.
  • [17] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, et al. (2011) The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. Cited by: §4.2.
  • [18] D. A. Reynolds, T. F. Quatieri, and R. B. Dunn (2000) Speaker verification using adapted Gaussian mixture models. Digital signal processing 10 (1-3), pp. 19–41. Cited by: §1.
  • [19] D. Snyder, G. Chen, and D. Povey (2015) MUSAN: a music, speech, and noise corpus. arXiv preprint arXiv:1510.08484. Cited by: §4.1.
  • [20] D. Snyder, D. Garcia-Romero, A. McCree, G. Sell, D. Povey, and S. Khudanpur (2018) Spoken language recognition using x-vectors. In Proc. Odyssey 2018 The Speaker and Language Recognition Workshop, pp. 105–111. Cited by: §1.
  • [21] D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur (2018) X-vectors: robust dnn embeddings for speaker recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5329–5333. Cited by: §1.
  • [22] B. Sun, J. Feng, and K. Saenko (2016) Return of frustratingly easy domain adaptation. In Thirtieth AAAI Conference on Artificial Intelligence. Cited by: §2.
  • [23] E. Variani, X. Lei, E. McDermott, I. L. Moreno, and J. Gonzalez-Dominguez (2014) Deep neural networks for small footprint text-dependent speaker verification. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4052–4056. Cited by: §1.
  • [24] Y. Zhang, L. Li, and D. Wang (2019) VAE-based regularization for deep speaker embedding. arXiv preprint arXiv:1904.03617. Cited by: §1, §1, §2, §3.2, §4.2.