Within-sample variability-invariant loss for robust speaker recognition under noisy environments

02/03/2020
by Danwei Cai et al.
Duke University

Despite the significant improvements in speaker recognition enabled by deep neural networks, unsatisfactory performance persists under noisy environments. In this paper, we train the speaker embedding network to learn the "clean" embedding of a noisy utterance. Specifically, the network is trained with the original speaker identification loss together with an auxiliary within-sample variability-invariant loss. This auxiliary loss encourages a clean utterance and its noisy copies to share the same embedding and prevents the network from encoding undesired noise or variability into the speaker representation. Furthermore, we investigate a data preparation strategy that generates clean and noisy utterance pairs on the fly: different noisy copies of the same clean utterance are generated at each training step, helping the speaker embedding network generalize better under noisy environments. Experiments on VoxCeleb1 show that the proposed training framework improves the performance of the speaker verification system in both clean and noisy conditions.

1 Introduction

Automatic speaker verification (ASV) refers to automatically deciding whether to accept or reject a claimed speaker identity by analyzing speech from that speaker. In the past few years, the performance of ASV systems has improved significantly with the successful application of deep neural networks (DNNs) to speaker embedding modeling [1, 2]. However, unsatisfactory performance persists under noisy environments, which are commonly encountered when ASV is deployed on smartphones or smart speakers. Additive noise contaminates the low-energy regions of the spectrogram of clean speech and blurs its acoustic details [3]. The resulting loss of speech intelligibility and quality imposes great challenges on speaker recognition systems.

To compensate for these adverse effects, various approaches have been proposed at different stages of the ASV pipeline. At the signal level, DNN-based speech or feature enhancement [4, 5, 6, 7] has been investigated for ASV under complex environments. At the feature level, feature normalization techniques [8] and noise-robust features such as power-normalized cepstral coefficients (PNCC) [9] have been applied to ASV systems. At the model level, robust back-end modeling methods such as multi-condition training of probabilistic linear discriminant analysis (PLDA) models [10] and the mixture of PLDA [11] have been employed in the i-vector framework [12]. Score normalization [13] can also be used to improve the robustness of ASV systems under noisy scenarios.

More recently, researchers have been training deep speaker networks to cope with the distortions caused by noise. Within this framework, there are two main approaches. The first regards the noisy data as a different domain from the clean data and applies adversarial training to handle the domain mismatch and obtain a noise-invariant speaker embedding [14, 15]. The second employs a DNN speech enhancement network for the ASV task. Shon et al. [16] train the speech enhancement network with feedback from the speaker network to find the time-frequency bins of noisy speech that are beneficial to ASV. Zhao et al. [17] use the intermediate output of the speech enhancement network as an auxiliary input to the speaker embedding network and jointly optimize the two networks.

In this work, our network learns the enhancement directly at the embedding level for speaker recognition under noisy environments. We train the deep speaker embedding network by combining the original speaker identification loss with an auxiliary within-sample loss. The speaker identification loss learns the speaker representation from the speaker labels, while the within-sample loss pushes the embedding of a noisy utterance to be as similar as possible to that of its clean version. In this way, the deep speaker embedding network is trained to avoid encoding the additive noise into the speaker representation and to learn a "clean" embedding for the noisy speech utterance. We call this loss, which helps the speaker network learn variability-invariant embeddings, the within-sample variability-invariant loss.

Furthermore, to fully explore the modeling ability of the within-sample variability-invariant loss, we dynamically generate the clean and noisy utterance pairs when preparing data for the training process. Different noisy copies for the same clean utterance are generated at different training steps, helping the speaker embedding network generalize better under noisy environments.

2 Revisit: Deep speaker embedding

In this section, we describe the deep speaker embedding framework, which consists of a frame-level local pattern extractor, an utterance-level encoding layer, and several fully-connected layers for speaker embedding extraction and speaker classification.

Given a variable-length input feature sequence, the local pattern extractor, which is typically a convolutional neural network (CNN) [2] or a time-delay neural network (TDNN) [1], learns frame-level representations. An encoding layer is then applied on top of it to obtain the utterance-level representation. The most common encoding method is the average pooling layer, which aggregates statistics (i.e., the mean, or the mean and standard deviation) over the frame-level representations [1, 2]. The self-attentive pooling layer [18], the learnable dictionary encoding layer [19], and the dictionary-based NetVLAD layer [20, 21] are other commonly used encoding layers. Once the utterance-level representation is extracted, a fully connected layer and a speaker classifier further abstract the speaker representation and classify the training speakers. After training, the deep speaker embedding is extracted from the penultimate layer of the network for a given variable-length utterance.
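As a concrete, unofficial sketch of this pipeline, the following PyTorch-style module wires a frame-level extractor, an encoding layer, and the two fully connected layers together; the extractor and encoder modules, names, and dimensions are placeholders of our own, not the authors' released code.

```python
import torch
import torch.nn as nn

class DeepSpeakerEmbedding(nn.Module):
    """Minimal sketch: frame-level extractor -> encoding layer -> FC layers."""

    def __init__(self, extractor, encoder, enc_dim, emb_dim, num_speakers):
        super().__init__()
        self.extractor = extractor                      # e.g. a ResNet/TDNN front-end
        self.encoder = encoder                          # e.g. global statistics pooling
        self.embedding = nn.Linear(enc_dim, emb_dim)    # penultimate (embedding) layer
        self.classifier = nn.Linear(emb_dim, num_speakers)

    def forward(self, x):
        frame_repr = self.extractor(x)       # frame-level representations
        utt_repr = self.encoder(frame_repr)  # utterance-level representation
        emb = self.embedding(utt_repr)       # speaker embedding (penultimate output)
        logits = self.classifier(emb)        # speaker classification used during training
        return emb, logits
```

At test time only the embedding output is used; the classifier head is discarded.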

In this work, the local pattern extractor is a residual convolutional neural network (ResNet) [22], and the encoding layer is a global statistics pooling (GSP) layer. For a frame-level representation $\mathbf{F} \in \mathbb{R}^{C \times H \times W}$, the output of GSP is the utterance-level representation $[\boldsymbol{\mu}; \boldsymbol{\sigma}]$, where $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ are the mean and standard deviation of the feature map:

$$\mu_c = \frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W} F_{c,h,w}, \qquad \sigma_c = \sqrt{\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}\left(F_{c,h,w} - \mu_c\right)^2} \tag{1}$$

$C$, $H$, and $W$ denote the number of channels, the height, and the width of the feature map, respectively.
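A minimal sketch of the GSP computation in Eq. (1), assuming the feature map is a PyTorch tensor of shape (batch, channels, height, width); this is our own illustration:

```python
import torch

def global_statistics_pooling(feature_map: torch.Tensor) -> torch.Tensor:
    """Concatenate the per-channel mean and standard deviation computed
    over the spatial (height, width) axes of a (B, C, H, W) feature map."""
    mu = feature_map.mean(dim=(2, 3))                     # (B, C)
    sigma = feature_map.std(dim=(2, 3), unbiased=False)   # (B, C)
    return torch.cat([mu, sigma], dim=1)                  # (B, 2C)
```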

3 Methods

In this section, we describe the proposed framework with within-sample variability-invariant loss and online noisy data generation.

3.1 Within-sample variability-invariant loss

A clean speech utterance and its noisy copies contain the same acoustic content for recognizing the speaker. Ideally, the speaker embedding of a noisy utterance should be the same as that of its clean version. In reality, however, the deep speaker embedding network usually encodes the noise as part of the speaker representation for noisy speech.

In this work, we train the local pattern extractor to learn the enhancement at the embedding level. Formally, for a clean utterance $x$ and its noisy copy $x + n$ with additive noise $n$, the speaker embeddings extracted by the network $f(\cdot)$ are

$$e_{\text{clean}} = f(x), \qquad e_{\text{noisy}} = f(x + n). \tag{2}$$

A loss function $\mathcal{L}_{\text{ws}}$ at the embedding level is used to measure the difference between the noisy embedding and the clean embedding from the same sample. The learning objective for the speaker network is

$$\min_{f} \; \mathcal{L}_{\text{ws}}\big(e_{\text{clean}},\, e_{\text{noisy}}\big) = \min_{f} \; \mathcal{L}_{\text{ws}}\big(f(x),\, f(x+n)\big). \tag{3}$$

In this way, the speaker embedding network is trained to ignore the additive noise and learn noise-invariant embeddings. We refer to this loss function as the within-sample variability-invariant loss. Two loss functions are investigated in this work: the mean square error (MSE) regression loss and the cosine embedding loss.

The MSE regression loss is the squared L2 distance between the clean embedding $e_{\text{clean}}$ and its noisy version $e_{\text{noisy}}$, averaged over the embedding dimensions,

$$\mathcal{L}_{\text{MSE}} = \frac{1}{D}\,\big\lVert e_{\text{clean}} - e_{\text{noisy}} \big\rVert_2^2, \tag{4}$$

where $\lVert\cdot\rVert_2$ denotes the L2 norm and $D$ is the dimension of the speaker embeddings.

The cosine embedding loss is the cosine distance between the clean embedding and its noisy version,

$$\mathcal{L}_{\cos} = 1 - \frac{e_{\text{clean}} \cdot e_{\text{noisy}}}{\lVert e_{\text{clean}} \rVert_2 \, \lVert e_{\text{noisy}} \rVert_2}. \tag{5}$$
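The two loss choices follow directly from Eqs. (4) and (5); the PyTorch sketch below is our own illustration, and the function names are ours:

```python
import torch
import torch.nn.functional as F

def mse_within_sample_loss(clean_emb: torch.Tensor,
                           noisy_emb: torch.Tensor) -> torch.Tensor:
    """Eq. (4): squared L2 distance averaged over embedding dimensions
    (and over the batch)."""
    return F.mse_loss(noisy_emb, clean_emb)

def cosine_within_sample_loss(clean_emb: torch.Tensor,
                              noisy_emb: torch.Tensor) -> torch.Tensor:
    """Eq. (5): cosine distance (1 - cosine similarity), averaged over the batch."""
    return (1.0 - F.cosine_similarity(noisy_emb, clean_emb, dim=1)).mean()
```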

The within-sample variability-invariant loss works together with the original speaker identification loss to train the speaker embedding network. The speaker identification loss is typically a cross-entropy loss. In our implementation, the parameters of the network are updated twice at each training step: the first update comes from the speaker identification loss, followed by a second update from the within-sample variability-invariant loss. Figure 1 shows the flowchart of the proposed framework.
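A sketch of one such two-update training step, assuming the model interface and loss functions from the earlier sketches. The paper does not state here whether the identification loss is computed on the clean copy, the noisy copy, or both, so applying it to the noisy copy is our assumption for illustration:

```python
import torch

def train_step(model, optimizer, clean_batch, noisy_batch, speaker_labels,
               within_sample_loss, id_criterion=torch.nn.CrossEntropyLoss()):
    """One training step: first update with the speaker identification loss,
    then a second update with the within-sample variability-invariant loss."""
    # Update 1: speaker identification (cross-entropy) loss.
    optimizer.zero_grad()
    _, logits = model(noisy_batch)
    id_loss = id_criterion(logits, speaker_labels)
    id_loss.backward()
    optimizer.step()

    # Update 2: pull the noisy embedding toward its clean counterpart.
    optimizer.zero_grad()
    clean_emb, _ = model(clean_batch)
    noisy_emb, _ = model(noisy_batch)
    ws_loss = within_sample_loss(clean_emb, noisy_emb)
    ws_loss.backward()
    optimizer.step()

    return id_loss.item(), ws_loss.item()
```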

Figure 1: Training the deep speaker embedding network with the within-sample variability-invariant loss.
  
| Noise type | SNR (dB) | Clean, softmax | Offline AUG, softmax | Online AUG, softmax | Online AUG, s.+MSE | Online AUG, s.+cosine | Online AUG, A-softmax | Online AUG, As.+MSE | Online AUG, As.+cosine |
|---|---|---|---|---|---|---|---|---|---|
| Original set | – | 0.453 / 3.73 | 0.451 / 3.65 | 0.516 / 3.66 | 0.418 / 3.46 | 0.459 / 3.47 | 0.456 / 3.56 | 0.442 / 3.49 | 0.435 / 3.12 |
| Babble | 0 | 0.974 / 24.16 | 0.900 / 13.29 | 0.877 / 12.32 | 0.822 / 11.10 | 0.821 / 11.21 | 0.861 / 12.57 | 0.844 / 10.93 | 0.848 / 11.78 |
| Babble | 5 | 0.881 / 12.25 | 0.749 / 6.96 | 0.688 / 6.63 | 0.683 / 5.94 | 0.709 / 5.99 | 0.647 / 6.56 | 0.662 / 5.83 | 0.619 / 5.97 |
| Babble | 10 | 0.682 / 6.91 | 0.588 / 5.23 | 0.577 / 4.87 | 0.535 / 4.57 | 0.548 / 4.68 | 0.519 / 4.86 | 0.610 / 4.38 | 0.557 / 4.44 |
| Babble | 15 | 0.596 / 4.94 | 0.506 / 4.46 | 0.538 / 4.27 | 0.508 / 3.94 | 0.479 / 4.13 | 0.476 / 4.15 | 0.509 / 3.89 | 0.480 / 3.73 |
| Babble | 20 | 0.493 / 4.07 | 0.483 / 4.05 | 0.513 / 3.76 | 0.440 / 3.61 | 0.484 / 3.75 | 0.467 / 3.77 | 0.478 / 3.66 | 0.453 / 3.36 |
| Music | 0 | 0.921 / 16.02 | 0.758 / 9.01 | 0.728 / 8.44 | 0.710 / 7.65 | 0.742 / 7.74 | 0.784 / 8.66 | 0.725 / 7.27 | 0.722 / 7.79 |
| Music | 5 | 0.838 / 9.81 | 0.665 / 6.02 | 0.678 / 5.92 | 0.608 / 5.47 | 0.582 / 5.29 | 0.628 / 5.88 | 0.594 / 5.36 | 0.626 / 5.23 |
| Music | 10 | 0.691 / 6.31 | 0.560 / 4.90 | 0.577 / 4.67 | 0.572 / 4.30 | 0.542 / 4.51 | 0.510 / 4.56 | 0.507 / 4.25 | 0.490 / 4.11 |
| Music | 15 | 0.547 / 4.82 | 0.508 / 4.29 | 0.519 / 4.15 | 0.458 / 3.90 | 0.476 / 3.94 | 0.484 / 4.05 | 0.479 / 3.82 | 0.456 / 3.63 |
| Music | 20 | 0.535 / 4.19 | 0.491 / 3.91 | 0.507 / 3.84 | 0.451 / 3.71 | 0.483 / 3.66 | 0.470 / 3.74 | 0.448 / 3.65 | 0.437 / 3.30 |
| Noise | 0 | 0.968 / 15.20 | 0.781 / 8.61 | 0.757 / 8.09 | 0.715 / 7.25 | 0.708 / 7.31 | 0.696 / 8.00 | 0.724 / 7.31 | 0.742 / 7.34 |
| Noise | 5 | 0.823 / 9.81 | 0.675 / 6.43 | 0.688 / 6.03 | 0.629 / 5.56 | 0.637 / 5.62 | 0.657 / 6.09 | 0.615 / 5.64 | 0.640 / 5.65 |
| Noise | 10 | 0.724 / 7.15 | 0.598 / 5.07 | 0.602 / 4.92 | 0.557 / 4.52 | 0.570 / 4.50 | 0.563 / 4.85 | 0.574 / 4.59 | 0.553 / 4.35 |
| Noise | 15 | 0.611 / 5.54 | 0.556 / 4.50 | 0.579 / 4.38 | 0.492 / 4.11 | 0.521 / 4.14 | 0.519 / 4.30 | 0.528 / 4.03 | 0.503 / 3.85 |
| Noise | 20 | 0.540 / 4.57 | 0.500 / 4.07 | 0.547 / 3.97 | 0.476 / 3.83 | 0.501 / 3.79 | 0.467 / 3.85 | 0.470 / 3.72 | 0.452 / 3.44 |
| All noises | – | 0.798 / 9.40 | 0.644 / 6.33 | 0.650 / 6.00 | 0.602 / 5.51 | 0.614 / 5.56 | 0.607 / 6.01 | 0.607 / 5.40 | 0.596 / 5.45 |

Table 1: Performance on the VoxCeleb1 test set, reported as DCF / EER (%). Here s. denotes softmax and As. denotes A-softmax.

3.2 Online data augmentation

In this work, we implement an online data augmentation strategy. The noise type, noise clips, and signal-to-noise ratio (SNR) are randomly selected to generate each clean-noisy utterance pair during training. Different combinations of these random parameters yield different noisy segments for the same utterance at different training steps, so the network never "sees" the same noisy segment of the same clean speech twice.

During training, the SNR is a continuous random variable uniformly distributed between 0 and 20 dB, and there are four types of noise: music, ambient noise, television, and babble. The television noise is generated by mixing one music file and one speech file. The babble noise is constructed by mixing three to six speech files, which results in multiple voices overlapping with the foreground speech.
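The sketch below illustrates one possible implementation of this on-the-fly generation. The waveform handling, the layout of the hypothetical noise_bank dictionary (pre-loaded clips under "music", "ambient", and "speech" keys), and all helper names are our assumptions, not the authors' pipeline:

```python
import random
import numpy as np

NOISE_TYPES = ["music", "ambient", "television", "babble"]

def add_noise_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise clip into clean speech at the requested SNR (in dB)."""
    if len(noise) < len(clean):                      # loop the noise if it is too short
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    start = random.randint(0, len(noise) - len(clean))
    noise = noise[start:start + len(clean)]
    clean_power = np.mean(clean ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

def make_training_pair(clean: np.ndarray, noise_bank: dict) -> tuple:
    """Draw a fresh noise type, clip(s), and SNR on every call, so the same
    clean utterance gets a different noisy copy at every training step."""
    noise_type = random.choice(NOISE_TYPES)
    snr_db = random.uniform(0.0, 20.0)
    if noise_type == "babble":                       # mix three to six speech files
        clips = random.sample(noise_bank["speech"], k=random.randint(3, 6))
        n_len = min(len(c) for c in clips)
        noise = np.sum([c[:n_len] for c in clips], axis=0)
    elif noise_type == "television":                 # one music file plus one speech file
        m = random.choice(noise_bank["music"])
        s = random.choice(noise_bank["speech"])
        n_len = min(len(m), len(s))
        noise = m[:n_len] + s[:n_len]
    else:                                            # music or ambient noise
        noise = random.choice(noise_bank[noise_type])
    return clean, add_noise_at_snr(clean, noise, snr_db)
```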

| Layer | Structure |
|---|---|
| Conv1 | Convolutional layer |
| Residual Layers 1–4 | ResNet-34 residual blocks [22] |
| Encoding | Global Statistics Pooling |
| Embedding | Fully Connected Layer |
| Classifier | Fully Connected Layer |

Table 2: The network architecture (output sizes, kernel sizes, strides, and the shortcut-convolution details of each layer are omitted here).

4 Experiments

4.1 Dataset

The experiments are conducted on the VoxCeleb1 dataset [23]. The training set contains 148,642 utterances from 1,211 speakers. The test set contains 4,874 utterances from 40 speakers, which form 37,720 test trials. Although VoxCeleb, collected from online videos, is not strictly clean, we treat the original data as the clean dataset and generate noisy data from it.

The MUSAN dataset [24] is used as the noise source. We split MUSAN into two non-overlapping subsets, which are used to generate the noisy training and testing data, respectively.

4.2 Experimental setup

Speech signals are first converted to 64-dimensional log Mel-filterbank energies and then fed into the speaker embedding network. The detailed network architecture is shown in Table 2. The front-end local pattern extractor is based on the well-known ResNet-34 architecture [22]. ReLU activation and batch normalization are applied to each convolutional layer.

For the speaker identification loss, a standard softmax-based cross-entropy loss or the angular softmax (A-softmax) loss [25] is used. When training with the softmax loss, dropout is added to the penultimate fully connected layer to prevent overfitting.

Three training data settings are investigated: (1) the original VoxCeleb1 dataset (clean); (2) the original training data plus offline-generated noisy data, i.e., noisy data generated in advance (offline AUG); (3) the original training data with online data augmentation (online AUG).

At the testing stage, cosine similarity is used for scoring. We use the equal error rate (EER) and the detection cost function (DCF) as performance metrics. The reported DCF is the average of the two minimum DCFs obtained with $P_{\text{target}} = 0.01$ and $P_{\text{target}} = 0.001$.
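For illustration, a sketch of the cosine scoring and the EER computation (the DCF is omitted; scikit-learn's ROC utilities are only one of several ways to obtain the EER):

```python
import numpy as np
from sklearn.metrics import roc_curve

def cosine_score(emb1: np.ndarray, emb2: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(emb1, emb2) /
                 (np.linalg.norm(emb1) * np.linalg.norm(emb2) + 1e-12))

def equal_error_rate(scores: np.ndarray, labels: np.ndarray) -> float:
    """EER: operating point where false-acceptance and false-rejection rates meet."""
    fpr, tpr, _ = roc_curve(labels, scores)   # labels: 1 = target trial, 0 = non-target
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return float((fpr[idx] + fnr[idx]) / 2.0)
```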

4.3 Experimental results

Eight deep speaker embedding networks are trained under the three training conditions and with different loss functions. Table 1 shows the DCF and EER for three noise types (babble, ambient noise, and music) at five SNR settings (0, 5, 10, 15, and 20 dB). All 15 noisy test conditions are also combined to form the "all noises" trial list.

Figure 2: DET curves for four deep speaker embedding systems.
Figure 3: Training loss curves for the network trained with the speaker softmax loss and the within-sample MSE loss. The reference within-sample MSE loss between the clean and noisy data of the converged network trained with only the softmax loss is also given.
Figure 4: t-SNE visualization of speaker embeddings extracted from the training dataset. Each marker corresponds to a different speaker, and each color within the same marker corresponds to a different utterance. A clean utterance and its noisy copies have the same color.

Several observations can be made from the results:
1) Data augmentation greatly improves the performance of the deep speaker embedding system under noisy conditions.
2) Compared with the offline data augmentation strategy, the performance improvement achieved by online data augmentation is more obvious in low-SNR conditions.
3) Training the deep speaker embedding system with the within-sample variability-invariant loss improves performance in both the clean and all noisy conditions.
4) Compared with the network trained with offline data augmentation, the proposed framework, which combines the within-sample variability-invariant loss with online data augmentation, achieves 13.0% and 6.5% reductions in EER and DCF, respectively.
5) When the speaker embedding network is trained discriminatively with the A-softmax loss and its angular margin, the proposed within-sample loss still improves performance by constraining the distance between the clean utterance and its noisy copies.

The detection error tradeoff (DET) curves in Figure 2 compare four selected systems, two of which are trained with the proposed framework. The DET curves use test trials from all the noisy conditions.

We also visualize the speaker embeddings using the t-distributed stochastic neighbor embedding (t-SNE) algorithm [26]. The two-dimensional projections of the speaker embeddings are shown in Figure 4. Four speakers, each with six clean utterances, are selected from the training dataset for visualization, and each clean utterance has three 5 dB noisy copies with music, babble, and ambient noise. Compared with the clean training condition, data augmentation helps the clean and noisy embeddings from the same utterance cluster together. Furthermore, after training the deep speaker embedding network with the within-sample variability-invariant loss, the clean and noisy embeddings of the same utterance are even closer to each other.
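A possible recipe for this visualization step, using scikit-learn's TSNE and matplotlib; the marker and color assignments here are illustrative and not the exact plotting code behind Figure 4:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_embeddings(embeddings: np.ndarray, speaker_ids, utterance_ids):
    """Project speaker embeddings to 2-D with t-SNE and scatter-plot them,
    one marker per speaker and one color per utterance."""
    points = TSNE(n_components=2, perplexity=10, init="pca").fit_transform(embeddings)
    markers = ["o", "s", "^", "D"]                     # one marker per speaker
    for i, (x, y) in enumerate(points):
        plt.scatter(x, y,
                    marker=markers[speaker_ids[i] % len(markers)],
                    c=[plt.cm.tab20(utterance_ids[i] % 20)])
    plt.title("t-SNE of clean and noisy speaker embeddings")
    plt.show()
```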

The loss values for each training epoch are shown in Figure 3 for the network trained with the speaker softmax loss and the within-sample MSE loss. The reference MSE loss between the embeddings of the clean and noisy data from the converged network trained with only the softmax loss is also given. We observe that the MSE loss stays at a low level during training, which helps the network extract noisy embeddings that are similar to their clean versions.

5 Conclusion

This paper has proposed the within-sample variability-invariant loss for training deep speaker embedding networks under noisy conditions. By constraining the embeddings extracted from a clean utterance and its noisy copies, the proposed loss works with the original speaker identification loss to learn robust embeddings for noisy speech. We also employ a data preparation strategy that generates clean and noisy utterance pairs on the fly, helping the speaker embedding network generalize better under noisy environments. The proposed framework is flexible and can be extended to similar applications where multiple views of the same training speech sample are available.

6 Acknowledgement

This research is funded in part by the National Natural Science Foundation of China (61773413) and Duke Kunshan University.

References

  • [1] D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur, “x-vectors: Robust DNN Embeddings for Speaker Recognition,” in ICASSP, 2018, pp. 5329–5333.
  • [2] W. Cai, J. Chen, and M. Li, “Exploring the Encoding Layer and Loss Function in End-to-End Speaker and Language Recognition System,” in Speaker Odyssey, 2018, pp. 74–81.
  • [3] M. Wolfel and J. McDonough, Distant Speech Recognition, John Wiley & Sons, Incorporated, 2009.
  • [4] X. Zhao, Y. Wang, and D. Wang, “Robust Speaker Identification in Noisy and Reverberant Conditions,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 4, pp. 836–845, 2014.
  • [5] M. Kolboek, Z. Tan, and J. Jensen, “Speech Enhancement Using Long Short-Term Memory based Recurrent Neural Networks for Noise Robust Speaker Verification,” in SLT, 2016, pp. 305–311.
  • [6] Z. Oo, Y. Kawakami, L. Wang, S. Nakagawa, X. Xiao, and M. Iwahashi, “DNN-Based Amplitude and Phase Feature Enhancement for Noise Robust Speaker Identification,” in Interspeech, 2016, pp. 2204–2208.
  • [7] O. Plchot, L. Burget, H. Aronowitz, and P. Matejka, “Audio Enhancing with DNN Autoencoder for Speaker Recognition,” in ICASSP, 2016, pp. 5090–5094.
  • [8] J. Pelecanos and S. Sridharan, “Feature Warping for Robust Speaker Verification,” in Speaker Odyssey, 2001, pp. 213–218.
  • [9] C. Kim and R. M Stern, “Power-Normalized Cepstral Coefficients (PNCC) for Robust Speech Recognition,” IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 24, no. 7, pp. 1315–1329, 2016.
  • [10] D. Garcia-Romero, X. Zhou, and C. Y. Espy-Wilson, “Multi-Condition Training of Gaussian PLDA Models in i-vector Space for Noise and Reverberation Robust Speaker Recognition,” in ICASSP, 2012, pp. 4257–4260.
  • [11] M. Mak, X. Pang, and J. Chien, “Mixture of PLDA for Noise Robust i-Vector Speaker Verification,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 1, pp. 130–142, 2016.
  • [12] N. Dehak, P. J. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-End Factor Analysis for Speaker Verification,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2011.
  • [13] I. Peer, B. Rafaely, and Y. Zigel, “Reverberation Matching for Speaker Recognition,” in ICASSP, 2008, pp. 4829–4832.
  • [14] J. Zhou, T. Jiang, L. Li, Q. Hong, Z. Wang, and B. Xia, “Training Multi-Task Adversarial Network for Extracting Noise-Robust Speaker Embedding,” in ICASSP, 2019, pp. 6196–6200.
  • [15] Z. Meng, Y. Zhao, J. Li, and Y. Gong, “Adversarial Speaker Verification,” in ICASSP, 2019, pp. 6216–6220.
  • [16] S. Shon, H. Tang, and J. Glass, “VoiceID Loss: Speech Enhancement for Speaker Verification,” in Interspeech, 2019, pp. 2888–2892.
  • [17] F. Zhao, H. Li, and X. Zhang, “A Robust Text-independent Speaker Verification Method Based on Speech Separation and Deep Speaker,” in ICASSP, 2019, pp. 6101–6105.
  • [18] G. Bhattacharya, J. Alam, and P. Kenny, “Deep Speaker Embeddings for Short-Duration Speaker Verification,” in Interspeech, 2017, pp. 1517–1521.
  • [19] W. Cai, Z. Cai, X. Zhang, X. Wang, and M. Li, “A Novel Learnable Dictionary Encoding Layer for End-to-End Language Identification,” in ICASSP, 2018, pp. 5189–5193.
  • [20] J. Chen, W. Cai, D. Cai, Z. Cai, H. Zhong, and M. Li, “End-to-end Language Identification using NetFV and NetVLAD,” in ISCSLP, 2018.
  • [21] W. Xie, A. Nagrani, J. S. Chung, and A. Zisserman, “Utterance-level Aggregation For Speaker Recognition In The Wild,” in ICASSP, 2019, pp. 5791–5795.
  • [22] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in CVPR, 2016, pp. 770–778.
  • [23] A. Nagrani, J. S. Chung, and A. Zisserman, “Voxceleb: A Large-Scale Speaker Identification Dataset,” in Interspeech, 2017, pp. 2616–2620.
  • [24] D. Snyder, G. Chen, and D. Povey, “MUSAN: A Music, Speech, and Noise Corpus,” arXiv:1510.08484 [cs], 2015.
  • [25] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, “SphereFace: Deep Hypersphere Embedding for Face Recognition,” in CVPR, 2017, pp. 212–220.
  • [26] L. van der Maaten and G. Hinton, “Visualizing Data using t-SNE,” Journal of Machine Learning Research, vol. 9, no. 11, pp. 2579–2605, 2008.