The performance of speaker verification degrades significantly when the speech contains background noise and/or is corrupted by interfering speakers. Speaker diarization is usually applied to non-overlapped multi-talker speech through speaker segmentation and clustering. It still works well on slightly overlapped multi-talker speech by detecting and excluding the overlapped segments [2, 3]. However, such systems fail when multiple talkers speak at the same time.
One possible solution is to separate the multi-talker speech into individual speakers with a speech separation system, such as deep clustering, deep attractor networks, or permutation invariant training [6, 7, 8]. Although such approaches have significantly improved speech separation performance, they require the number of speakers to be known a priori. In real speaker verification applications, however, the number of speakers in the test speech is unknown.
To address the limitation that the number of speakers must be known a priori, this paper proposes an overlapped multi-talker speaker verification framework based on target speaker extraction. Given the target speaker information, the target speaker's speech is extracted from the overlapped multi-talker speech by a target speaker extraction module. Although the extraction system requires target speaker information, this is not a limitation in practice, because that information is already provided as the enrollment speech in a speaker verification system. The extracted speech is then passed to the speaker verification system (an i-vector/PLDA system [9, 10, 11, 12] in this work) to verify whether it belongs to the target speaker.
Moreover, this paper compares the effectiveness of two target speaker extraction networks, SBF-MTSAL and SBF-MTSAL-Concat, for overlapped multi-talker speaker verification. Experimental results demonstrate that the proposed method significantly improves the performance of speaker verification on overlapped multi-talker speech. In addition, SBF-MTSAL-Concat outperforms SBF-MTSAL on this task.
2 Multi-Talker Speaker Verification with Speaker Extraction
Since the target speaker information is given in speaker verification, target speaker extraction is a natural way to address the overlapped multi-talker speaker verification problem. Fig. 1 illustrates the framework of the proposed overlapped multi-talker speaker verification system with target speaker extraction. The framework consists of a target speaker extraction module and a speaker verification system. Specifically, given a trial, the enrollment utterance and the overlapped multi-talker test utterance are fed into the target speaker extraction network, which extracts the target speaker's speech from the test mixture. The extracted speech and the enrollment speech are then used as inputs to a standard i-vector/PLDA speaker verification system [9, 10, 11, 12] to verify whether the extracted speech belongs to the target speaker.
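The two-stage trial processing described above can be sketched as follows. This is a minimal illustration, not the released implementation: `extractor` and `sv_backend` are hypothetical placeholders standing in for the SBF-based extraction network and the i-vector/PLDA back-end.

```python
import numpy as np

def verify_overlapped_trial(enroll_wav, mixture_wav, extractor, sv_backend):
    """Hypothetical pipeline sketch: extract the target speaker's speech
    from the overlapped mixture using the enrollment as the speaker cue,
    then score the extracted speech against the enrollment with the
    speaker verification back-end."""
    # 1. Target speaker extraction conditioned on the enrollment utterance.
    extracted_wav = extractor.extract(mixture=mixture_wav, aux=enroll_wav)
    # 2. i-vector/PLDA scoring between enrollment and extracted speech.
    score = sv_backend.score(enroll=enroll_wav, test=extracted_wav)
    return score  # higher score => more likely the target speaker
```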
In this paper, we compare two target speaker extraction methods within the multi-talker speaker verification framework: (1) the SpeakerBeam front-end with magnitude and temporal spectrum approximation loss (SBF-MTSAL) and (2) SBF-MTSAL with a concatenation framework (SBF-MTSAL-Concat). Both methods extend the SpeakerBeam front-end (SBF) [14, 15].
SBF-MTSAL. The SBF-MTSAL approach uses an auxiliary network to learn adaptation weights from the target speaker's voice, taken from an utterance different from the target speaker's utterance in the mixture. The adaptation weights encode speaker characteristics and are used to weight the sub-layers of the adaptation layer in a mask estimation network with a CADNN structure. Instead of computing the objective loss between the ideal binary mask and the estimated mask as in the original work, SBF-MTSAL computes a magnitude and temporal spectrum approximation loss to estimate a phase-sensitive mask, due to its better performance. The magnitude and its dynamic information (i.e., delta and acceleration) are used in the objective loss to encourage temporal continuity.
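The core of the CADNN-style adaptation can be sketched with NumPy: each sub-layer transforms the input, and the K sub-layer outputs are scaled by the speaker-dependent adaptation weights and summed. The ReLU activation, shapes, and the phase-sensitive mask clipping range below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def adaptation_layer(x, weights_list, biases_list, alpha):
    """Sketch of the CADNN-style adaptation layer assumed in SBF-MTSAL:
    K ReLU-activated affine sub-layers, weighted by the speaker-dependent
    adaptation weights `alpha` (from the auxiliary network) and summed."""
    out = np.zeros((x.shape[0], weights_list[0].shape[1]))
    for W, b, a in zip(weights_list, biases_list, alpha):
        out += a * np.maximum(0.0, x @ W + b)  # weighted ReLU sub-layer
    return out

def psm_target(mix_mag, src_mag, phase_diff):
    """Illustrative phase-sensitive mask target |S|/|Y| * cos(theta),
    clipped to [0, 1] (the clipping range is an assumption)."""
    return np.clip(src_mag * np.cos(phase_diff) / np.maximum(mix_mag, 1e-8),
                   0.0, 1.0)
```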
SBF-MTSAL-Concat. The auxiliary network learns a speaker embedding, which encodes speaker characteristics, from a different utterance of the target speaker. The speaker embedding is then repeatedly concatenated with the activations of a BLSTM in the mask estimation network. The concatenated representations, containing both the mixture and the target speaker information, are used to estimate a phase-sensitive mask with the same loss function as in SBF-MTSAL.
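The repeat-and-concatenate fusion step can be sketched as below; the shapes are illustrative, since the actual layer sizes are not specified here.

```python
import numpy as np

def concat_speaker_embedding(blstm_out, spk_emb):
    """Sketch of the SBF-MTSAL-Concat fusion: the utterance-level speaker
    embedding is tiled across all frames and concatenated with the
    frame-level BLSTM activations of the mask estimation network.
    blstm_out: (T, D) frame activations; spk_emb: (E,) embedding."""
    tiled = np.tile(spk_emb, (blstm_out.shape[0], 1))  # (T, E)
    return np.concatenate([blstm_out, tiled], axis=1)  # (T, D + E)
```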
3 Experimental Setup
3.1 Speech Data
The two-speaker mixed dataset¹ used to train the target speaker extraction network was simulated at a sampling rate of 8 kHz based on the WSJ0 corpus. In simulating each two-speaker mixture, the first selected speaker was taken as the target speaker and the other as the interference speaker. The target speaker's utterance from the original WSJ0 corpus was used as the reference speech. Another utterance of the same target speaker, different from the reference speech, was randomly selected as input to the auxiliary network to provide the target speaker information.

¹The database simulation code is available at: https://github.com/xuchenglin28/speaker_extraction
The simulated dataset was divided into a training set, a development set, and a test set. Specifically, utterances from male and female speakers in the WSJ0 "si_tr_s" set were randomly selected to generate the training and development sets. The SNR of each mixture was randomly sampled between 0 dB and 5 dB. Similarly, the test set was created by randomly mixing utterances from male and female speakers in the WSJ0 "si_dt_05" and "si_et_05" sets. Since the speakers in the test set do not overlap with those in the training and development sets, the test set was used to evaluate speaker verification performance.
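The mixing step can be sketched as scaling the interference signal so the target-to-interference ratio matches the sampled SNR before summing. This is only a minimal sketch of the idea; the released simulation code linked above is the authoritative reference.

```python
import numpy as np

def mix_at_snr(target, interference, snr_db):
    """Illustrative two-speaker mixture simulation: scale the interference
    so the target-to-interference power ratio equals `snr_db`, then sum."""
    n = min(len(target), len(interference))
    t, i = target[:n], interference[:n]
    p_t = np.mean(t ** 2)
    p_i = np.mean(i ** 2) + 1e-12  # guard against silent interference
    scale = np.sqrt(p_t / (p_i * 10.0 ** (snr_db / 10.0)))
    return t + scale * i
```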
3.2 Target Speaker Extraction Network Setup
A short-time Fourier transform (STFT) with a window length of ms and a shift of ms was used to obtain magnitude features from both the input mixture (for the mask estimation network) and the target speech (for the auxiliary network). A normalized square-root Hamming window was applied.
The learning rate started from an initial value and was scaled down whenever the loss increased on the development set. The network was trained for a minimum number of epochs and stopped when the relative loss reduction fell below a threshold. The Adam algorithm was used to optimize the network.
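The stopping rule described above can be sketched as follows. Since the exact settings are not given in this excerpt, `min_epochs` and `rel_tol` below are assumed placeholder values.

```python
def should_stop(losses, min_epochs=30, rel_tol=1e-4):
    """Sketch of the early-stopping rule: train for at least `min_epochs`
    epochs, then stop once the relative loss reduction between the last
    two epochs falls below `rel_tol` (both values are assumptions)."""
    if len(losses) < max(min_epochs, 2):
        return False
    prev, cur = losses[-2], losses[-1]
    return (prev - cur) / abs(prev) < rel_tol
```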
The aforementioned magnitude extraction configuration and network training scheme were kept the same for both SBF-MTSAL and SBF-MTSAL-Concat. The extracted magnitude spectrum was reconstructed into a time-domain signal using the phase of the mixture. This time-domain signal was then used as the input to the speaker verification system.
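The reconstruction step (estimated magnitude plus mixture phase, then inverse STFT) can be sketched with SciPy. The window and frame parameters below are illustrative defaults, not the paper's settings.

```python
import numpy as np
from scipy.signal import stft, istft

def reconstruct_with_mixture_phase(est_mag, mixture, fs=8000, nperseg=256):
    """Sketch of the waveform reconstruction step: combine the estimated
    target magnitude with the phase of the mixture STFT and invert.
    `est_mag` must have the same shape as the mixture's STFT."""
    _, _, Y = stft(mixture, fs=fs, nperseg=nperseg)
    phase = np.angle(Y)
    _, x_hat = istft(est_mag * np.exp(1j * phase), fs=fs, nperseg=nperseg)
    return x_hat
```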
For SBF-MTSAL, the auxiliary network was composed of feed-forward ReLU hidden layers and a linear output layer. The adaptation weights were obtained by averaging the linear layer's outputs over all frames. The mask estimation network used a BLSTM in each forward and backward direction, followed by an adaptation layer with several sub-layers, each taking the outputs of the preceding BLSTM as input. The weights from the auxiliary network were used to weight these sub-layers, respectively, and the activations of the adaptation layer were summed over all sub-layers. Further feed-forward ReLU hidden layers were appended, and a final mask layer predicted the mask for the target speaker.
[Table 1, row for system 6 (Upper Bound): Clean training, Clean test, no extraction; EER 3.33, DCF08 0.357, DCF10 0.454]
For SBF-MTSAL-Concat, the auxiliary network had a BLSTM in each forward and backward direction, a feed-forward ReLU hidden layer, and a linear layer. The output of the linear layer was averaged over all frames to obtain a speaker embedding containing the target speaker characteristics. This embedding was repeatedly concatenated with the activations of the BLSTM layer in the mask estimation network. The concatenated outputs were then fed to a feed-forward ReLU hidden layer, another BLSTM layer, and a further feed-forward ReLU hidden layer, followed by the mask layer.
3.3 Speaker Verification (SV) System
From the test set of the simulated dataset, we generated 3,000 target trials and 48,000 non-target trials for the SV evaluation. In these trials, each enrollment utterance contained a contiguous speech segment from a single speaker, and each test utterance contained overlapped speech from multiple speakers. We refer to this as the mixture evaluation set. Moreover, to show the upper bound of target speaker extraction for SV, we generated another evaluation set with 51,000 trials from the WSJ0 corpus according to the mixture set's metadata. This is referred to as the clean evaluation set².

²The SV evaluation trials and keys for the clean and mixture evaluation sets are available at: https://github.com/xuchenglin28/speaker_extraction
We selected the 8,769 utterances from 101 speakers in the WSJ0 corpus that were used to generate the training set of the simulated database, and used them to train the UBM, total variability matrix, LDA, and PLDA models. This set is named the clean training set. Because this paper directly uses the extracted target speaker's speech for SV, there is a mismatch between the extracted speech and the clean speech. To alleviate this mismatch, we pooled 5,000 extracted utterances from the development set of the simulated two-speaker mixed dataset with the clean training set to train the speaker verification system. We call this training set the clean+ext set. Section 4 reports the performance with different training and evaluation sets.
The features of the SV system were 19 MFCCs plus energy, together with their first- and second-order derivatives, extracted from the speech regions, followed by cepstral mean normalization with a window size of 3 seconds. A 60-dimensional acoustic vector was extracted every 10 ms using a 25 ms Hamming window. An energy-based voice activity detection method was used to remove silence frames. The system was based on a gender-independent UBM with 512 mixtures. The training set described in the previous paragraph was used to estimate the UBM and the total variability matrix with 400 total factors. The same data were used to estimate the LDA and Gaussian PLDA models with 150 latent variables.
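The energy-based voice activity detection step mentioned above can be sketched as a simple frame-energy threshold. The 30 dB margin below the utterance maximum is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def energy_vad(frame_energies, threshold_db=30.0):
    """Minimal energy-based VAD sketch: keep frames whose log energy is
    within `threshold_db` of the utterance's maximum (assumed margin)."""
    log_e = 10.0 * np.log10(np.maximum(frame_energies, 1e-12))
    return log_e > (log_e.max() - threshold_db)
```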
4 Experimental Results
To investigate the effect of overlapped test speech on speaker verification, we performed speaker verification experiments on both the mixture and clean evaluation sets described in Section 3.3. System 1 of Table 1 is the baseline SV system trained on clean data and tested on the mixture set. System 6 of Table 1 shows the upper-bound (ideal) performance of target speaker extraction for overlapped multi-talker SV. Comparing system 1 with system 6 shows that speaker verification performance degrades severely when the test speech is fully overlapped multi-talker speech.
Table 1 also presents the performance of the speaker verification system on the evaluation set with and without target speaker extraction. System 1 of Table 1 gives the baseline results for overlapped multi-talker speaker verification, while systems 3 to 5 show the performance after applying target speaker extraction. The results of systems 1 to 5 demonstrate that applying target speaker extraction significantly improves overlapped multi-talker speaker verification. Specifically, system 5 of Table 1 obtains large relative reductions over the baseline (system 1) in EER, DCF08, and DCF10.
This paper compares two target speaker extraction methods for overlapped multi-talker speaker verification: (1) SBF-MTSAL and (2) SBF-MTSAL-Concat. Comparing systems 3 and 4 of Table 1 shows that SBF-MTSAL-Concat outperforms SBF-MTSAL on both EER and the DCFs.
To alleviate the mismatch between extracted speech and clean speech for SV, we combined the clean training set with the extracted speech data (Clean+Ext) to train the speaker verification system, and evaluated it on both the clean and mixture test sets. Because SBF-MTSAL-Concat achieves the better performance, we conducted this experiment only with SBF-MTSAL-Concat. Systems 2, 5, and 7 in Table 1 show the performance of the SV system with Clean+Ext training data. Comparing system 2 with systems 1 and 3 to 5 shows that most of the improvement in overlapped multi-talker speaker verification comes from the target speaker extraction methods. Comparing systems 6 and 7 shows that the Clean+Ext training set improves speaker verification on the clean test set in terms of EER but degrades the DCFs. Finally, comparing systems 4 and 5 demonstrates that the Clean+Ext training set further improves speaker verification performance with SBF-MTSAL-Concat on the mixture test set.
5 Conclusions and Future Works
This paper applies target speaker extraction to improve overlapped multi-talker speaker verification. Experimental results show that the proposed method significantly improves performance on overlapped multi-talker speech. The paper also compares SBF-MTSAL and SBF-MTSAL-Concat on this task and finds that SBF-MTSAL-Concat achieves the better performance.
This paper mainly focuses on fully overlapped test speech from multiple speakers. In future work, we will investigate the effectiveness of the proposed method when the enrollment is also multi-talker speech, apply the proposed method to publicly available speaker verification databases, and explore the joint training of target speaker extraction and speaker verification.
-  Xavier Anguera, Simon Bozonnet, Nicholas Evans, Corinne Fredouille, Gerald Friedland, and Oriol Vinyals, “Speaker diarization: A review of recent research,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 2, pp. 356–370, 2012.
-  Delphine Charlet, Claude Barras, and Jean-Sylvain Liénard, “Impact of overlapping speech detection on speaker diarization for broadcast news and debates,” in Proc. of ICASSP. IEEE, 2013, pp. 7707–7711.
-  Sree Harsha Yella and Hervé Bourlard, “Overlapping speech detection using long-term conversational features for speaker diarization in meeting room conversations,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 12, pp. 1688–1700, 2014.
-  John R Hershey, Zhuo Chen, Jonathan Le Roux, and Shinji Watanabe, “Deep clustering: Discriminative embeddings for segmentation and separation,” in Proceedings of ICASSP. IEEE, 2016, pp. 31–35.
-  Zhuo Chen, Yi Luo, and Nima Mesgarani, “Deep attractor network for single-microphone speaker separation,” in Proceedings of ICASSP. IEEE, 2017, pp. 246–250.
-  Morten Kolbæk, Dong Yu, Zheng-Hua Tan, and Jesper Jensen, “Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 10, pp. 1901–1913, 2017.
-  Chenglin Xu, Wei Rao, Xiong Xiao, Eng Siong Chng, and Haizhou Li, “Single channel speech separation with constrained utterance level permutation invariant training using grid lstm,” in Proceedings of ICASSP. IEEE, 2018.
-  Chenglin Xu, Wei Rao, Eng Siong Chng, and Haizhou Li, “A shifted delta coefficient objective for monaural speech separation using multi-task learning,” in Proceedings of Interspeech, 2018, pp. 3479–3483.
-  N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Trans. on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, May 2011.
-  P. Kenny, “Bayesian speaker verification with heavy-tailed priors,” in Proc. of Odyssey: Speaker and Language Recognition Workshop, Brno, Czech Republic, Jun. 2010.
-  S.J.D. Prince and J.H. Elder, “Probabilistic linear discriminant analysis for inferences about identity,” in Proc. of 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, Oct. 2007, pp. 1–8.
-  D. Garcia-Romero and C. Y. Espy-Wilson, “Analysis of i-vector length normalization in speaker recognition systems,” in Proc. of Interspeech 2011, Florence, Italy, Aug. 2011, pp. 249–252.
-  Chenglin Xu, Wei Rao, Eng Siong Chng, and Haizhou Li, “Optimization of speaker extraction neural network with magnitude and temporal spectrum approximation loss,” Accepted in ICASSP 2019.
-  Marc Delcroix, Katerina Zmolikova, Keisuke Kinoshita, Atsunori Ogawa, and Tomohiro Nakatani, “Single channel target speaker extraction and recognition with speaker beam,” in Proceedings of ICASSP. IEEE, 2018, pp. 5554–5558.
-  Marc Delcroix, Keisuke Kinoshita, Chengzhu Yu, Atsunori Ogawa, Takuya Yoshioka, and Tomohiro Nakatani, “Context adaptive deep neural networks for fast acoustic model adaptation in noisy conditions,” in Proceedings of ICASSP. IEEE, 2016, pp. 5270–5274.
-  Hakan Erdogan, John R Hershey, Shinji Watanabe, and Jonathan Le Roux, “Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks,” in Proceedings of ICASSP. IEEE, 2015, pp. 708–712.
-  John Garofolo, D Graff, D Paul, and D Pallett, “Csr-i (wsj0) complete ldc93s6a,” Web Download. Philadelphia: Linguistic Data Consortium, 1993.
-  Diederik Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  B. S. Atal, “Effectiveness of linear prediction characteristics of the speech wave for automatic speaker identification and verification,” J. Acoust. Soc. Am., vol. 55, no. 6, pp. 1304–1312, Jun. 1974.