Target Speaker Extraction for Overlapped Multi-Talker Speaker Verification

02/07/2019
by   Wei Rao, et al.

The performance of speaker verification degrades significantly when the test speech is corrupted by interfering speakers. Speaker diarization separates speakers well when their speech does not overlap in time. However, when multiple talkers speak at the same time, we need a technique that separates the speech in the spectral domain. This paper proposes an overlapped multi-talker speaker verification framework based on target speaker extraction. Specifically, given the target speaker information, the target speaker's speech is first extracted from the overlapped multi-talker speech by a target speaker extraction module. The extracted speech is then passed to the speaker verification system. Experimental results show that the proposed approach significantly improves the performance of overlapped multi-talker speaker verification and achieves a 65.7% relative reduction in equal error rate over the baseline.

1 Introduction

The performance of speaker verification is significantly degraded when the speech contains background noise and/or is corrupted by interfering speakers. Speaker diarization is usually applied to non-overlapped multi-talker speech via speaker segmentation and clustering [1]. It still works well when the multi-talker speech is only slightly overlapped, by detecting and excluding the overlapped segments [2, 3]. However, such a system fails when multiple talkers speak at the same time.

One possible solution is to separate the multi-talker speech into different speakers using a speech separation system, such as deep clustering [4], the deep attractor network [5], permutation invariant training [6, 7, 8], and so on. Although these approaches have significantly improved speech separation performance, they require the number of speakers to be known a priori. However, the number of speakers in the test speech is unknown in real speaker verification applications.

To address the limitation that the number of speakers must be known a priori, this paper proposes an overlapped multi-talker speaker verification framework based on target speaker extraction. Given the target speaker information, the target speaker's speech is extracted from the overlapped multi-talker speech by a target speaker extraction module. Although the target speaker extraction system requires target speaker information, this is not a limitation in practice, because the target speaker information is already provided as the enrollment speech in a speaker verification system. The extracted speech is then passed to the speaker verification system (an i-vector/PLDA speaker verification system [9, 10, 11, 12] in this work) to verify whether the extracted speech belongs to the target speaker.

Moreover, this paper compares the effectiveness of two target speaker extraction networks for overlapped multi-talker speaker verification: SBF-MTSAL [13] and SBF-MTSAL-Concat [13]. Experimental results demonstrate that the proposed method significantly improves the performance of speaker verification on overlapped multi-talker speech. In addition, SBF-MTSAL-Concat outperforms SBF-MTSAL in overlapped multi-talker speaker verification.

The remainder of the paper is organized as follows. Section 2 introduces the proposed overlapped multi-talker speaker verification framework. Sections 3 and 4 report the experimental setup and results. Conclusions and future work are presented in Section 5.

2 Multi-Talker Speaker Verification with Speaker Extraction

Since the target speaker information is given in speaker verification, target speaker extraction is a natural option for addressing the overlapped multi-talker speaker verification problem. Fig. 1 illustrates the framework of the proposed overlapped multi-talker speaker verification system with target speaker extraction. The framework consists of a target speaker extraction module and a speaker verification system. Specifically, given a trial, the enrollment utterance and the overlapped multi-talker test utterance are fed into the target speaker extraction network, which extracts the target speaker's speech from the test utterance. The extracted speech and the enrollment speech are then used as inputs to a standard i-vector/PLDA speaker verification system [9, 10, 11, 12] to verify whether the extracted speech belongs to the target speaker.
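For concreteness, the sketch below shows the shape of this pipeline in Python. The extraction and scoring functions are hypothetical placeholders (an identity pass-through and a crude spectral similarity), not the SBF networks or the i-vector/PLDA back-end the paper actually uses; they only illustrate how the two modules are chained.

    import numpy as np

    def extract_target_speech(mixture, enrollment):
        # Hypothetical stand-in for the target speaker extraction network
        # (SBF-MTSAL or SBF-MTSAL-Concat in the paper); here it simply returns
        # the mixture unchanged so the pipeline runs end to end.
        return mixture

    def score_trial(enrollment, test):
        # Hypothetical stand-in for the i-vector/PLDA back-end: a crude
        # spectral cosine similarity instead of a PLDA log-likelihood ratio.
        a = np.abs(np.fft.rfft(enrollment))
        b = np.abs(np.fft.rfft(test))
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def verify(enrollment, mixture, threshold=0.5):
        # Proposed framework: extract the target speaker's speech first, then
        # verify the extracted speech against the enrollment.
        extracted = extract_target_speech(mixture, enrollment)
        score = score_trial(enrollment, extracted)
        return score, score > threshold

    # Toy usage with random 1 s signals at 8 kHz (equal length for simplicity).
    rng = np.random.default_rng(0)
    enroll, mix = rng.standard_normal(8000), rng.standard_normal(8000)
    print(verify(enroll, mix))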

In this paper, we compare two target speaker extraction methods within the multi-talker speaker verification framework: (1) the SpeakerBeam front-end with magnitude and temporal spectrum approximation loss (SBF-MTSAL) [13] and (2) SBF-MTSAL with a concatenation framework (SBF-MTSAL-Concat) [13]. Both methods are extensions of the SpeakerBeam front-end (SBF) [14, 15].

Figure 1: Flow chart of the overlapped multi-talker speaker verification system with target speaker extraction. The symbols in the figure denote the mixture speech, the enrollment speech, and the target speaker's speech extracted from the mixture. The target speaker extraction network takes the enrollment utterance of the target speaker as auxiliary information to extract the speech component of the mixture that belongs to the target speaker.

Figure 2: The architecture of SBF-MTSAL. "AL" in the trapezium box represents the adaptation layer and "Sub" a sub-layer. The remaining symbols denote the adaptation weights obtained from the auxiliary network, the number of sub-layers, the magnitude of the mixture speech, the output magnitude of the extracted target speaker's speech, the magnitude of the clean reference speech used to simulate the mixture, and the magnitude of the auxiliary speech. During evaluation, the upper-right dotted box is not needed.

Figure 3: The architecture of SBF-MTSAL-Concat. The symbols have the same meanings as in the caption of Fig. 2. During evaluation, the upper-right dotted box is not needed.

2.1 SBF-MTSAL

Fig. 2 shows the architecture of SBF-MTSAL [13]. The SBF-MTSAL approach uses an auxiliary network to learn adaptation weights from the target speaker's voice, taken from an utterance different from the target speaker's utterance in the mixture. The adaptation weights carry speaker characteristics and are used to weight the sub-layers in the adaptation layer of the mask estimation network, which has a CADNN structure. Instead of computing the objective loss between the ideal binary mask and the estimated mask as in the original work [14], SBF-MTSAL computes a magnitude and temporal spectrum approximation loss to estimate a phase-sensitive mask [16], which performs better. The magnitude and its dynamic information (i.e., delta and acceleration) are used in the objective loss to encourage temporal continuity.
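The sketch below illustrates one plausible form of this magnitude and temporal spectrum approximation loss, assuming unit weights on the delta and acceleration terms and a simple first-order difference for the dynamic features; the exact formulation is given in [13, 16].

    import numpy as np

    def deltas(x):
        # First-order temporal difference as a simple stand-in for the delta
        # (dynamic) features; the exact delta computation is assumed here.
        return np.diff(x, axis=0, prepend=x[:1])

    def mtsa_loss(mask, mix_mag, mix_phase, ref_mag, ref_phase):
        # Phase-sensitive target: reference magnitude scaled by the cosine of
        # the phase difference between reference and mixture.
        target = ref_mag * np.cos(ref_phase - mix_phase)
        est = mask * mix_mag                        # extracted magnitude
        loss = np.mean((est - target) ** 2)         # static magnitude term
        d_est, d_tgt = deltas(est), deltas(target)
        loss += np.mean((d_est - d_tgt) ** 2)                    # delta term
        loss += np.mean((deltas(d_est) - deltas(d_tgt)) ** 2)    # acceleration term
        return loss

    # Toy usage with random (frames x frequency-bins) arrays.
    rng = np.random.default_rng(0)
    T, F = 100, 129
    mask = rng.uniform(0, 1, (T, F))
    mix_mag, ref_mag = rng.uniform(size=(T, F)), rng.uniform(size=(T, F))
    mix_phase = rng.uniform(-np.pi, np.pi, (T, F))
    ref_phase = rng.uniform(-np.pi, np.pi, (T, F))
    print(mtsa_loss(mask, mix_mag, mix_phase, ref_mag, ref_phase))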

2.2 SBF-MTSAL-Concat

Fig. 3 illustrates the architecture of the SBF-MTSAL-Concat method [13]. The auxiliary network learns a speaker embedding, which carries speaker characteristics, from a different utterance of the target speaker. The speaker embedding is then repeatedly concatenated with the activations of a BLSTM in the mask estimation network. The concatenated representations, which contain both the mixture and the target speaker information, are used to estimate a phase-sensitive mask with the same loss function as in SBF-MTSAL.
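A minimal NumPy sketch of this repeat-and-concatenate step is shown below; the array sizes are illustrative only.

    import numpy as np

    def repeat_concat(blstm_activations, speaker_embedding):
        # Repeat the utterance-level speaker embedding at every frame and
        # concatenate it with the frame-level BLSTM activations.
        T = blstm_activations.shape[0]
        tiled = np.tile(speaker_embedding[None, :], (T, 1))        # (T, emb_dim)
        return np.concatenate([blstm_activations, tiled], axis=1)  # (T, act_dim + emb_dim)

    # Toy usage with illustrative (not the paper's) dimensions.
    acts = np.zeros((100, 1024))   # frame-level BLSTM activations
    emb = np.ones(30)              # utterance-level speaker embedding
    print(repeat_concat(acts, emb).shape)   # (100, 1054)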

3 Experimental Setup

3.1 Speech Data

The two-speaker mixture dataset used to train the target speaker extraction network was simulated at a sampling rate of 8 kHz based on the WSJ0 corpus [17] (the database simulation code is available at: https://github.com/xuchenglin28/speaker_extraction). In each simulated two-speaker mixture, the first selected speaker was chosen as the target speaker and the other as the interference speaker. The target speaker's utterance from the original WSJ0 corpus was used as the reference speech. Another utterance of this target speaker, different from the reference speech, was randomly selected as the input to the auxiliary network to provide the target speaker information.

The simulated dataset was divided into a training set, a development set, and a test set. Specifically, utterances from the male and female speakers in the WSJ0 "si_tr_s" set were randomly selected to generate the training and development sets. The SNR of each mixture was randomly selected between 0 dB and 5 dB. Similarly, the test set was created by randomly mixing utterances from the male and female speakers in the WSJ0 "si_dt_05" and "si_et_05" sets. Since the speakers in the test set differ from those in the training and development sets, the test set was used to evaluate speaker verification performance.
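The snippet below sketches how such a two-speaker mixture could be simulated at a given SNR; the released simulation code (linked above) may differ in detail, for example in how utterance lengths are handled.

    import numpy as np

    def mix_at_snr(target, interference, snr_db):
        # Truncate to equal length, then scale the interference so that the
        # target-to-interference power ratio equals snr_db, and sum the signals.
        n = min(len(target), len(interference))
        target, interference = target[:n], interference[:n]
        p_t = np.mean(target ** 2)
        p_i = np.mean(interference ** 2) + 1e-12
        scale = np.sqrt(p_t / (p_i * 10 ** (snr_db / 10.0)))
        return target + scale * interference

    # Toy usage: two random 8 kHz signals mixed at an SNR drawn from [0, 5] dB.
    rng = np.random.default_rng(0)
    tgt, itf = rng.standard_normal(16000), rng.standard_normal(16000)
    mixture = mix_at_snr(tgt, itf, rng.uniform(0.0, 5.0))
    print(mixture.shape)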

3.2 Target Speaker Extraction Network Setup

A short-time Fourier transform (STFT) with a normalized square-root Hamming window was used to obtain the magnitude features, both from the input mixture for the mask estimation network and from the target speech input for the auxiliary network.
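The sketch below shows one way to compute these magnitude features with a normalized square-root Hamming window; the frame length and shift used in the toy example (32 ms / 16 ms at 8 kHz) and the window normalization are assumptions, not values taken from the paper.

    import numpy as np

    def stft_magnitude(signal, frame_len, frame_shift):
        # Normalized square-root Hamming analysis window (normalization assumed).
        window = np.sqrt(np.hamming(frame_len))
        window /= np.linalg.norm(window)
        n_frames = 1 + (len(signal) - frame_len) // frame_shift
        frames = np.stack([signal[i * frame_shift : i * frame_shift + frame_len]
                           for i in range(n_frames)])
        spec = np.fft.rfft(frames * window, axis=1)
        return np.abs(spec), np.angle(spec)

    # Toy usage: 1 s of noise at 8 kHz with assumed 32 ms frames / 16 ms shift.
    rng = np.random.default_rng(0)
    mag, phase = stft_magnitude(rng.standard_normal(8000), 256, 128)
    print(mag.shape)   # (61, 129)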

The learning rate was scaled down whenever the training loss on the development set increased. The network was trained in minibatches for a minimum number of epochs and stopped when the relative loss reduction fell below a threshold. The Adam algorithm [18] was used to optimize the network.

The aforementioned magnitude extraction configuration and network training scheme were kept the same for the SBF-MTSAL and SBF-MTSAL-Concat methods. The extracted magnitudes were reconstructed into a time-domain signal using the phase of the mixture. This time-domain signal was then used as the input to the speaker verification system.
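A minimal sketch of this reconstruction step (overlap-add of the extracted magnitude combined with the mixture phase) is given below; a production inverse STFT would also normalize by the synthesis window, which is omitted here.

    import numpy as np

    def reconstruct(est_mag, mix_phase, frame_len, frame_shift):
        # Combine the extracted magnitude with the mixture phase, invert each
        # frame, and overlap-add into a time-domain signal.
        frames = np.fft.irfft(est_mag * np.exp(1j * mix_phase), n=frame_len, axis=1)
        out = np.zeros(frame_shift * (len(frames) - 1) + frame_len)
        for i, frame in enumerate(frames):
            out[i * frame_shift : i * frame_shift + frame_len] += frame
        return out

    # Toy usage matching the STFT sketch above (frame_len=256, frame_shift=128).
    est_mag = np.ones((61, 129))
    mix_phase = np.zeros((61, 129))
    print(reconstruct(est_mag, mix_phase, 256, 128).shape)   # (7936,)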

For SBF-MTSAL, the auxiliary network was composed of feed-forward ReLU hidden layers followed by a linear output layer. The adaptation weights were obtained by averaging the linear layer outputs over all frames. The mask estimation network used a BLSTM in each of the forward and backward directions, followed by an adaptation layer consisting of several sub-layers. Each sub-layer took the outputs of the preceding BLSTM as input, and the weights from the auxiliary network were used to weight the sub-layers, respectively. The activations of the adaptation layer were then summed over all sub-layers. Further feed-forward ReLU hidden layers were appended, and a mask layer predicted the mask for the target speaker.

System No. Training Eval. TSE EER DCF08 DCF10
1 (Baseline) Clean Mixture No 22.67 0.867 0.915
2 Clean+Ext Mixture No 21.67 0.850 0.898
3 Clean Mixture SBF-MTSAL 11.17 0.760 0.844
4 Clean Mixture SBF-MTSAL-Concat 10.40 0.736 0.813
5 Clean+Ext Mixture SBF-MTSAL-Concat 7.77 0.631 0.747
6 (Upper Bound) Clean Clean No 3.33 0.357 0.454
7 Clean+Ext Clean No 3.07 0.377 0.524
Table 1: Performance of the SV system without and with target speaker extraction. "Training" denotes the type of training data for the SV system. "Eval." denotes the type of evaluation test data. "TSE" denotes whether target speaker extraction is performed before SV: "No" means no speaker extraction is applied, while SBF-MTSAL/SBF-MTSAL-Concat means the corresponding extraction method is used. "Clean" denotes speech from a single speaker. "Mixture" denotes overlapped multi-talker speech. "Ext" denotes the target speaker's speech extracted from the mixture by the target speaker extraction network. "Clean+Ext" means the clean speech and the extracted target speaker's speech are pooled to train the SV system. "Baseline" is the baseline performance of the overlapped multi-talker speaker verification system. "Upper Bound" is the upper-bound performance of target speaker extraction for overlapped multi-talker speaker verification. "DCF08" and "DCF10" are the minimum detection costs computed with the NIST SRE 2008 and SRE 2010 cost parameters, respectively. Details of the experimental setup are given in Section 3.3.

For SBF-MTSAL-Concat, the auxiliary network consisted of a BLSTM (forward and backward directions), a feed-forward ReLU hidden layer, and a linear layer. The output of the linear layer was averaged over all frames to obtain a speaker embedding containing the target speaker characteristics. The speaker embedding was repeatedly concatenated with the activations of the BLSTM layer in the mask estimation network. The concatenated outputs were then fed to a feed-forward ReLU hidden layer, a BLSTM layer, and another feed-forward ReLU hidden layer, followed by a mask layer that predicted the mask for the target speaker.

3.3 Speaker Verification (SV) System

Based on the test set of the simulated dataset, we generated 3,000 target trials and 48,000 non-target trials for the SV evaluation. In these evaluation trials, each enrollment utterance contained a contiguous speech segment from a single speaker, while each test utterance contained overlapped speech from multiple speakers. We call this the mixture evaluation set. Moreover, to show the upper bound of target speaker extraction for SV, we generated another evaluation set with 51,000 trials from the WSJ0 corpus according to the configuration of the mixture set. This set is called the clean evaluation set (the SV evaluation trials and keys for the clean and mixture evaluation sets are available at: https://github.com/xuchenglin28/speaker_extraction).

We selected 8,769 utterances from the 101 speakers in the WSJ0 corpus that were used to generate the training set of the simulated database; these utterances were used to train the UBM, total variability matrix, LDA, and PLDA models. This set is named the clean training set. Because this paper directly uses the extracted target speaker's speech for SV, there is a mismatch between the extracted speech and clean speech. To address this mismatch, we pooled 5,000 extracted utterances from the development set of the simulated two-speaker mixture dataset with the clean training set to train the speaker verification system. We call this training set the clean+ext set. Section 4 reports the performance obtained with different training and evaluation sets.

The features of the SV system were 19 MFCCs together with energy plus their first and second derivatives, extracted from the speech regions and followed by cepstral mean normalization [19] with a window size of 3 seconds. A 60-dimensional acoustic vector was extracted every 10 ms using a 25 ms Hamming window. An energy-based voice activity detection method was used to remove silence frames. The system was based on a gender-independent UBM with 512 mixtures. The training set described in the previous paragraph was used to estimate the UBM and the total variability matrix with 400 total factors. The same data set was used to estimate the LDA and Gaussian PLDA models with 150 latent variables.
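The sketch below approximates this front-end with librosa; the 20 static cepstra stand in for the 19 MFCCs plus energy, the delta computation and the sliding cepstral mean normalization are simplified, and the energy-based VAD is omitted.

    import numpy as np
    import librosa

    def sv_features(wav, sr=8000):
        # 20 static cepstra (a stand-in for 19 MFCCs plus energy) with delta
        # and delta-delta, 25 ms Hamming window, 10 ms shift, followed by a
        # sliding cepstral mean normalization over roughly 3 s (300 frames).
        n_fft, hop = int(0.025 * sr), int(0.010 * sr)
        mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=20, n_fft=n_fft,
                                    hop_length=hop, window="hamming")
        feats = np.vstack([mfcc,
                           librosa.feature.delta(mfcc),
                           librosa.feature.delta(mfcc, order=2)])   # (60, T)
        T, half = feats.shape[1], 150
        out = np.empty_like(feats)
        for t in range(T):
            lo, hi = max(0, t - half), min(T, t + half + 1)
            out[:, t] = feats[:, t] - feats[:, lo:hi].mean(axis=1)
        return out

    # Toy usage: 2 s of noise at 8 kHz gives a (60, T) feature matrix.
    print(sv_features(np.random.default_rng(0).standard_normal(16000)).shape)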

4 Experimental Results

To investigate the effect of overlapped test speech on speaker verification, we performed speaker verification experiments on both the mixture and clean evaluation sets described in Section 3.3. System 1 in Table 1 is the baseline SV system trained on clean data and evaluated on the mixture test set. System 6 in Table 1 shows the upper-bound performance (also referred to as the ideal performance) of target speaker extraction for overlapped multi-talker SV. Comparing system 1 with system 6 shows that the performance of the speaker verification system degrades severely when the test speech is fully overlapped multi-talker speech.

Table 1 also presents the performance of the speaker verification system on the evaluation set without and with target speaker extraction. System 1 is the baseline for overlapped multi-talker speaker verification, and systems 3 to 5 show the performance after applying target speaker extraction. The results of systems 1 to 5 demonstrate that applying target speaker extraction significantly improves the performance of overlapped multi-talker speaker verification. Specifically, system 5 obtains around 65.7%, 27.2%, and 18.4% relative reduction over the baseline (system 1) on EER, DCF08, and DCF10, respectively.
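These relative reductions follow directly from Table 1, as the short computation below shows.

    # Relative reductions of system 5 over the baseline (system 1) in Table 1.
    baseline = {"EER": 22.67, "DCF08": 0.867, "DCF10": 0.915}
    system5 = {"EER": 7.77, "DCF08": 0.631, "DCF10": 0.747}
    for metric in baseline:
        rel = 100.0 * (baseline[metric] - system5[metric]) / baseline[metric]
        print(f"{metric}: {rel:.1f}% relative reduction")
    # Output: EER: 65.7%, DCF08: 27.2%, DCF10: 18.4%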

This paper compares two target speaker extraction methods for overlapped multi-talker speaker verification: (1) SBF-MTSAL and (2) SBF-MTSAL-Concat. The comparison between systems 3 and 4 in Table 1 shows that SBF-MTSAL-Concat outperforms SBF-MTSAL on both EER and the DCFs.

To alleviate the mismatch between extracted speech and clean speech for SV, we combine the clean training set with the extracted speech data (Clean+Ext) to train the speaker verification system. Both the clean and mixture test sets are used to evaluate this system. Because SBF-MTSAL-Concat achieves the better performance, this experiment is only applied to SBF-MTSAL-Concat. Systems 2, 5, and 7 in Table 1 show the performance of the SV system trained on the clean+ext data. Comparing system 2 with systems 1 and 3 to 5 shows that most of the improvement on overlapped multi-talker speaker verification comes from the target speaker extraction methods rather than from the additional training data. The comparison between systems 6 and 7 shows that the Clean+Ext training set improves the performance on the clean test set in terms of EER but degrades the DCFs. Finally, the comparison between systems 4 and 5 demonstrates that the Clean+Ext training set further improves the speaker verification performance with SBF-MTSAL-Concat on the mixture test set.

5 Conclusions and Future Work

This paper applies target speaker extraction to improve the performance of overlapped multi-talker speaker verification. Experimental results show that the proposed method significantly improves the performance of overlapped multi-talker speaker verification. This paper also compares SBF-MTSAL and SBF-MTSAL-Concat on overlapped multi-talker speaker verification and finds that SBF-MTSAL-Concat achieves better performance than SBF-MTSAL.

This paper mainly focuses on fully overlapped test speech from multiple speakers. In the future, we will investigate the effectiveness of the proposed method when the enrollment speech is also multi-talker speech, apply the proposed method to publicly available speaker verification databases, and explore joint training of target speaker extraction and speaker verification.

References

  • [1] Xavier Anguera, Simon Bozonnet, Nicholas Evans, Corinne Fredouille, Gerald Friedland, and Oriol Vinyals, “Speaker diarization: A review of recent research,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 2, pp. 356–370, 2012.
  • [2] Delphine Charlet, Claude Barras, and Jean-Sylvain Liénard, “Impact of overlapping speech detection on speaker diarization for broadcast news and debates,” in Proc. of ICASSP. IEEE, 2013, pp. 7707–7711.
  • [3] Sree Harsha Yella and Hervé Bourlard, “Overlapping speech detection using long-term conversational features for speaker diarization in meeting room conversations,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 12, pp. 1688–1700, 2014.
  • [4] John R Hershey, Zhuo Chen, Jonathan Le Roux, and Shinji Watanabe, “Deep clustering: Discriminative embeddings for segmentation and separation,” in Proceedings of ICASSP. IEEE, 2016, pp. 31–35.
  • [5] Zhuo Chen, Yi Luo, and Nima Mesgarani, “Deep attractor network for single-microphone speaker separation,” in Proceedings of ICASSP. IEEE, 2017, pp. 246–250.
  • [6] Morten Kolbæk, Dong Yu, Zheng-Hua Tan, and Jesper Jensen, “Multitalker speech separation with utterance-level permutation invariant training of deep recurrent neural networks,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 10, pp. 1901–1913, 2017.
  • [7] Chenglin Xu, Wei Rao, Xiong Xiao, Eng Siong Chng, and Haizhou Li, “Single channel speech separation with constrained utterance level permutation invariant training using grid lstm,” in Proceedings of ICASSP. IEEE, 2018.
  • [8] Chenglin Xu, Wei Rao, Eng Siong Chng, and Haizhou Li, “A shifted delta coefficient objective for monaural speech separation using multi-task learning,” in Proceedings of Interspeech, 2018, pp. 3479–3483.
  • [9] N. Dehak, P. Kenny, R. Dehak, P. Dumouchel, and P. Ouellet, “Front-end factor analysis for speaker verification,” IEEE Trans. on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, May 2011.
  • [10] P. Kenny, “Bayesian speaker verification with heavy-tailed priors,” in Proc. of Odyssey: Speaker and Language Recognition Workshop, Brno, Czech Republic, Jun. 2010.
  • [11] S.J.D. Prince and J.H. Elder, “Probabilistic linear discriminant analysis for inferences about identity,” in Proc. of 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, Oct. 2007, pp. 1–8.
  • [12] D. Garcia-Romero and C. Y. Espy-Wilson, “Analysis of i-vector length normalization in speaker recognition systems,” in Proc. of Interspeech 2011, Florence, Italy, Aug. 2011, pp. 249–252.
  • [13] Chenglin Xu, Wei Rao, Eng Siong Chng, and Haizhou Li, “Optimization of speaker extraction neural network with magnitude and temporal spectrum approximation loss,” Accepted in ICASSP 2019.
  • [14] Marc Delcroix, Katerina Zmolikova, Keisuke Kinoshita, Atsunori Ogawa, and Tomohiro Nakatani, “Single channel target speaker extraction and recognition with speaker beam,” in Proceedings of ICASSP. IEEE, 2018, pp. 5554–5558.
  • [15] Marc Delcroix, Keisuke Kinoshita, Chengzhu Yu, Atsunori Ogawa, Takuya Yoshioka, and Tomohiro Nakatani, “Context adaptive deep neural networks for fast acoustic model adaptation in noisy conditions,” in Proceedings of ICASSP. IEEE, 2016, pp. 5270–5274.
  • [16] Hakan Erdogan, John R Hershey, Shinji Watanabe, and Jonathan Le Roux, “Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks,” in Proceedings of ICASSP. IEEE, 2015, pp. 708–712.
  • [17] John Garofolo, D. Graff, D. Paul, and D. Pallett, “CSR-I (WSJ0) Complete LDC93S6A,” Web Download. Philadelphia: Linguistic Data Consortium, 1993.
  • [18] Diederik Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [19] B. S. Atal, “Effectiveness of linear prediction characteristics of the speech wave for automatic speaker identification and verification,” J. Acoust. Soc. Am., vol. 55, no. 6, pp. 1304–1312, Jun. 1974.