End-to-End Speaker Diarization for an Unknown Number of Speakers with Encoder-Decoder Based Attractors

05/20/2020
by Shota Horiguchi, et al. (Hitachi)

End-to-end speaker diarization for an unknown number of speakers is addressed in this paper. Recently proposed end-to-end speaker diarization outperformed conventional clustering-based speaker diarization, but it has one drawback: it is less flexible in terms of the number of speakers. This paper proposes a method for encoder-decoder based attractor calculation (EDA), which first generates a flexible number of attractors from a speech embedding sequence. Then, the generated attractors are multiplied by the speech embedding sequence to produce the same number of speaker activities. The speech embedding sequence is extracted using the conventional self-attentive end-to-end neural speaker diarization (SA-EEND) network. In a two-speaker condition, our method achieved a 2.69 % DER on simulated mixtures, while the conventional SA-EEND attained 4.56 %. In a condition with an unknown number of speakers, our method attained a 15.29 % DER on CALLHOME, while the x-vector-based clustering method achieved a 19.43 % DER.


1 Introduction

Speaker diarization is the task of estimating "who spoke when" from an audio recording. It is a key technology for various applications using automatic speech recognition (ASR) in multi-talker scenarios such as telephone conversations [kenny2010diarization], meetings [anguera2007acoustic], conferences and lectures [zhu2007multi], TV shows [vallet2012multimodal], and movies [kapsouras2017multimodal]. Accurate diarization has also been proven to improve ASR performance by constraining a speech mask when constructing a beamformer for speech separation [kanda2019guided, zorila2019investigation].

One major approach for speaker diarization is the clustering-based method [shum2013unsupervised, sell2014speaker], which applies the following processes to the input audio one by one: speech activity detection, speech segmentation, feature extraction, and clustering. Progress on better speaker embeddings, such as x-vectors [snyder2019speaker, diez2019bayesian] and d-vectors [wang2018speaker, zhang2019fully], has enabled accurate clustering-based diarization. However, most clustering-based approaches (except for a few studies, e.g., [huang2020speaker]) cannot deal with speaker overlap because each time slot is assigned to only one speaker.

End-to-end speaker diarization, called EEND [fujita2019end1, fujita2019end2], has been proposed to overcome this situation. The EEND is optimized to calculate the diarization results for every speaker in a mixture directly from the input audio features using permutation invariant training (PIT) [yu2017permutation]. The EEND, especially self-attentive EEND (SA-EEND), showed the effectiveness of end-to-end training of the diarization model by outperforming conventional clustering-based methods. One drawback is that the maximum number of speakers is pre-determined by the network architecture, so it cannot handle mixtures containing more speakers than this maximum. In this respect, EEND is less flexible than clustering-based methods, in which the number of speakers can easily be changed by setting the number of clusters at inference time.

This paper proposes an encoder-decoder based attractor calculation method called EDA. It determines a flexible (theoretically infinite) number of attractors from a speech embedding sequence. We applied it to SA-EEND to enable diarization with a flexible number of speakers: the diarization results are calculated from the dot products between all pairs of attractors and embeddings. Evaluation results on both simulated mixtures and real recordings showed that our method achieved better results than the x-vector-based clustering method and the conventional SA-EEND under both fixed and unknown numbers of speakers.

2 Related work

Several methods in the context of speech separation can process speech mixtures with a flexible number of speakers. One line of work applies a one-vs-rest approach iteratively [kinoshita2018listening, shi2018listen, von2019all, takahashi2019recursive]. However, it has a major drawback: the calculation is repeated until all the speakers have been extracted, so the computational time increases linearly with the number of speakers. Another line of work consists of attractor-based approaches, including the Deep Attractor Network (DANet) [chen2017deep]. DANet does not limit the number of speakers in the inference phase, but the number of speakers has to be known a priori. Anchored DANet [luo2018speaker] solved both of these problems, but it requires calculating dot products between all possible selections of anchors and the extracted embeddings even in the inference phase; thus, it is not scalable in terms of the number of speakers.

Several efforts have been made to calculate representatives from an embedding sequence in an end-to-end manner. Lee et al. proposed the Set Transformer for set-to-set transformation [lee2019set], but the number of outputs has to be defined beforehand. Meier et al. implemented end-to-end clustering by estimating the distribution for every possible number of clusters [meier2018learning], so the maximum number of clusters is limited by the network architecture. Li et al. proposed encoder-decoder based clustering for speaker diarization [li2019discriminative], which is the work most closely related to EDA. However, its output is a sequence of cluster indices, one for each input, so each time slot is assigned to a single cluster; therefore, it cannot deal with speaker overlap. Our proposed EDA, in contrast, determines a flexible number of attractors from an embedding sequence without prior knowledge of the number of clusters.

3 End-to-end neural diarization: Review

Here we briefly introduce our end-to-end diarization framework named EEND [fujita2019end1, fujita2019end2]. The EEND takes a T-length sequence of log-scaled Mel-filterbank based features as an input and processes it using bi-directional long short-term memory (BLSTM) [fujita2019end1] or Transformer encoders [fujita2019end2] to obtain an embedding e_t \in \mathbb{R}^D at each time slot t. After that, a linear transformation followed by an element-wise sigmoid function is applied to calculate the posteriors p_t = [p_{t,1}, \dots, p_{t,S}] of S speakers at time slot t. In the training phase, the EEND is optimized using the PIT scheme [yu2017permutation], i.e., the loss is calculated between the posteriors p_t and the groundtruth labels l_t = [l_{t,1}, \dots, l_{t,S}] \in \{0, 1\}^S as follows:

\mathcal{L}_d = \frac{1}{TS} \min_{\phi \in \mathrm{perm}(S)} \sum_{t=1}^{T} H(l_t^{\phi}, p_t)    (1)

where \mathrm{perm}(S) is the set of all possible permutations of the speakers, l_t^{\phi} is the permuted label at time slot t, and H(\cdot, \cdot) is the binary cross entropy defined as follows:

H(l_t, p_t) := \sum_{s=1}^{S} \left( -l_{t,s} \log p_{t,s} - (1 - l_{t,s}) \log (1 - p_{t,s}) \right)    (2)
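To make the permutation-invariant objective concrete, the following is a minimal PyTorch sketch of Equations 1 and 2 (the function and tensor names are illustrative, not the authors' released implementation). It enumerates all speaker permutations, which is tractable for the small numbers of speakers considered here.

```python
import itertools

import torch
import torch.nn.functional as F


def pit_diarization_loss(posteriors: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Permutation-invariant BCE loss, a sketch of Eqs. 1-2.

    posteriors: (T, S) speaker activity probabilities in (0, 1).
    labels:     (T, S) binary ground-truth activities (float).
    """
    T, S = labels.shape
    losses = []
    for perm in itertools.permutations(range(S)):
        permuted = labels[:, list(perm)]  # l_t^phi: labels under permutation phi
        losses.append(F.binary_cross_entropy(posteriors, permuted, reduction="sum"))
    # Minimum over all permutations, normalized by T*S as in Eq. 1.
    return torch.stack(losses).min() / (T * S)
```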

4 Proposed method

The EEND has a critical problem in that its output size is limited by the network architecture: the final linear transformation fixes the number of speakers during inference. It therefore cannot deal with an input mixture that contains more speakers than this capacity. To address this, we utilize an attractor-based method. To make the method end-to-end trainable, we designed the Encoder-Decoder based Attractor calculation (EDA), which determines attractors from an embedding sequence. An overview of our proposed method is shown in Figure 1. We use the same self-attentive network as in [fujita2019end2] as the backbone to obtain an embedding at each time slot. In this section, we explain how we calculate a flexible number of attractors from the embeddings and how we obtain diarization results using those attractors.

4.1 Encoder-decoder based attractor calculation

To calculate a flexible number of attractors from variable-length embedding sequences, we utilize an LSTM-based encoder-decoder [sutskever2014sequence]. A sequence of D-dimensional embeddings (e_1, \dots, e_T) is fed into a unidirectional LSTM encoder, and we obtain its final hidden state h_0 and cell state c_0:

h_0, c_0 = \mathrm{Encoder}(e_1, \dots, e_T)    (3)

Next, time-invariant D-dimensional attractors a_s are calculated using a unidirectional LSTM decoder whose states are initialized with h_0 and c_0, as follows:

h_s, c_s, a_s = \mathrm{Decoder}(h_{s-1}, c_{s-1}, \mathbf{0})    (4)

We use a D-dimensional zero vector \mathbf{0} as the decoder input at every decoding step. Theoretically, an infinite number of attractors can be produced by the LSTM decoder. The probability p_s that the attractor a_s exists, which determines when to stop the attractor calculation, is computed using a fully-connected layer with a sigmoid function as

p_s = \frac{1}{1 + \exp\left(-(w^{\top} a_s + b)\right)}    (5)

where w and b are the trainable weights and bias of the fully-connected layer, respectively.
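As a concrete illustration of Equations 3 to 5, below is a minimal PyTorch sketch of an LSTM-based EDA module (the class and variable names are hypothetical; this is not the authors' released code). The encoder summarizes the embedding sequence into its final hidden and cell states; the decoder, fed zero vectors, emits one attractor per step, and a linear layer with a sigmoid gives each attractor's existence probability.

```python
import torch
import torch.nn as nn


class EncoderDecoderAttractor(nn.Module):
    """Sketch of EDA: attractors from an embedding sequence (Eqs. 3-5)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, batch_first=True)
        self.existence = nn.Linear(dim, 1)  # w and b in Eq. 5

    def forward(self, embeddings: torch.Tensor, n_steps: int):
        # embeddings: (B, T, D), in chronological or shuffled order.
        _, (h, c) = self.encoder(embeddings)                      # Eq. 3
        zeros = embeddings.new_zeros(
            embeddings.size(0), n_steps, embeddings.size(2))      # zero-vector decoder inputs
        attractors, _ = self.decoder(zeros, (h, c))               # Eq. 4: one attractor per step
        probs = torch.sigmoid(self.existence(attractors)).squeeze(-1)  # Eq. 5
        return attractors, probs                                  # (B, n_steps, D), (B, n_steps)
```

During training the decoder would be run for S + 1 steps so that the stop condition can be learned; inference-time speaker counting is sketched after Equation 8 below.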

Figure 1: SA-EEND with encoder-decoder based attractor calculation.

We note that the output attractors depend on the order of the input embeddings because we use LSTMs for the EDA. To investigate the effect of the input order, we used two types of embedding order. One was chronological order, i.e., the embeddings were sorted by their time slot indices. The other was a shuffled order: in this case, we fed the shuffled sequence (e_{\psi(1)}, \dots, e_{\psi(T)}), where \psi is a permutation of (1, \dots, T), into the EDA, as in the sketch below.
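A shuffled input order can be realized by permuting the time axis before feeding the embeddings to the EDA; a small self-contained sketch with illustrative sizes:

```python
import torch

B, T, D = 1, 500, 256               # illustrative batch size, length, and embedding dimension
embeddings = torch.randn(B, T, D)   # stands in for the SA-EEND output
perm = torch.randperm(T)            # a random permutation psi of the time indices
shuffled = embeddings[:, perm, :]   # order-shuffled input to the EDA
```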

In the training phase, we defined the groundtruth labels l_s for attractor existence using the actual number of speakers S as follows:

l_s = 1 \ (s = 1, \dots, S), \qquad l_{S+1} = 0    (6)

The attractor existence loss between these labels and the estimated probabilities was then calculated using the binary cross entropy in Equation 2 as

\mathcal{L}_a = \frac{1}{S + 1} H(l, p)    (7)

where l = (l_1, \dots, l_{S+1}) and p = (p_1, \dots, p_{S+1}).

In the inference phase, if the number of speakers S was given, we used the first S attractors output by the EDA. If the number of speakers was unknown, we first estimated it using

\hat{S} = \max \left\{ s \mid s \in \mathbb{Z}_{+},\ p_s \geq \tau \right\}    (8)

with a given threshold \tau and then used the first \hat{S} attractors.
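At inference time, attractor decoding can simply be run for a generous number of steps and cut off at the first existence probability below the threshold; this is one reading of Equation 8, sketched below using the EncoderDecoderAttractor sketch above (names and the decoding limit are illustrative).

```python
import torch


@torch.no_grad()
def count_and_select_attractors(eda, embeddings, threshold: float = 0.5, max_speakers: int = 20):
    """Estimate the number of speakers (Eq. 8) and return their attractors."""
    attractors, probs = eda(embeddings, max_speakers)  # decode up to a generous limit
    exists = (probs[0] >= threshold).long()            # batch size assumed to be 1 here
    # Number of leading attractors whose existence probability stays above the threshold.
    n_speakers = int(exists.cumprod(dim=0).sum().item())
    return attractors[0, :n_speakers], n_speakers
```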

4.2 Speaker diarization using EDA

We define the matrix of embeddings extracted by the SA-EEND and the matrix of attractors calculated by the EDA, respectively, as follows.

E = [e_1, \dots, e_T] \in \mathbb{R}^{D \times T}    (9)
A = [a_1, \dots, a_{\hat{S}}] \in \mathbb{R}^{D \times \hat{S}}    (10)

The posterior probabilities \hat{Y} can then be calculated using the inner product of every embedding-attractor pair as follows:

\hat{Y} = \sigma(A^{\top} E) \in (0, 1)^{\hat{S} \times T}    (11)

where \sigma(\cdot) is the element-wise sigmoid function. Note that the output size is determined by the number of attractors, so our method can produce diarization results for a flexible number of speakers. Finally, the diarization loss is calculated in the same way as for SA-EEND, using the PIT loss in Equation 1.

The total loss is defined from the diarization loss in Equation 1 and the attractor existence loss in Equation 7 as follows:

\mathcal{L} = \mathcal{L}_d + \alpha \mathcal{L}_a    (12)

where \alpha is a weighting parameter. In this study, different values of \alpha were used for training on the simulated data and for adaptation on the real datasets.
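A minimal sketch of how the posteriors and the total training loss could be assembled (Equations 11 and 12), reusing the pit_diarization_loss sketch from Section 3; all names and the default weighting value are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F


def diarization_outputs_and_loss(embeddings, attractors, attractor_probs,
                                 labels, existence_labels, alpha: float = 1.0):
    """embeddings: (T, D); attractors: (S+1, D); labels: (T, S) float activities.

    attractor_probs / existence_labels: (S+1,) probabilities and 0/1 float targets (Eq. 6).
    alpha is an illustrative default for the weighting parameter.
    """
    # Eq. 11: posteriors from all embedding-attractor inner products
    # (the last attractor corresponds to the stop decision and is not used for diarization).
    posteriors = torch.sigmoid(embeddings @ attractors[:-1].T)   # (T, S)
    loss_diar = pit_diarization_loss(posteriors, labels)         # Eq. 1, sketched in Section 3
    loss_attr = F.binary_cross_entropy(attractor_probs, existence_labels)  # Eq. 7
    return posteriors, loss_diar + alpha * loss_attr             # Eq. 12
```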

5 Experiments

5.1 Data

For training and evaluation, we used simulated mixtures created from Switchboard-2 (Phases I, II, and III), Switchboard Cellular (Parts 1 and 2), and the NIST Speaker Recognition Evaluation datasets (2004, 2005, 2006, and 2008) for speech, and the MUSAN corpus [snyder2015musan] for noise, with the simulated room impulse responses used in [ko2017study], following the procedure in [fujita2019end2]. We note that the speaker sets of the training and test datasets did not overlap. In [fujita2019end2], only a 2-speaker dataset was constructed; in this study, we also created 1-, 3-, and 4-speaker datasets with overlap ratios similar to those of the 2-speaker mixtures. To evaluate performance on real recordings, we also used the telephone conversation dataset CALLHOME (CH) [callhome], the dialogue recordings from the Corpus of Spontaneous Japanese (CSJ) [maekawa2003corpus], and the dataset of the second DIHARD challenge [ryant2019second]. The statistics of the datasets are summarized in Table 1.

(a) Simulated datasets.
Dataset #Spk #Mixtures Overlap ratio (%)
Train
     Sim1spk 1 100,000 0.0
     Sim2spk 2 100,000 34.1
     Sim3spk 3 100,000 34.2
     Sim4spk 4 100,000 31.5
Test
     Sim1spk 1 500 0.0
     Sim2spk 2 500/500/500 34.4/27.3/19.6
     Sim3spk 3 500/500/500 34.7/27.4/19.1
     Sim4spk 4 500 32.0

(b) Real datasets.
Dataset #Spk #Mixtures Overlap ratio (%)
Train
     CALLHOME [callhome] 2 155 14.0
     CALLHOME [callhome] 3 61 19.6
     CALLHOME [callhome] 2-7 249 17.0
     DIHARD dev [ryant2019second] 1-10 192 9.8
Test
     CALLHOME [callhome] 2 148 13.1
     CALLHOME [callhome] 3 74 17.0
     CALLHOME [callhome] 2-6 250 16.7
     CSJ [maekawa2003corpus] 2 54 20.1
     DIHARD eval [ryant2019second] 1-9 194 8.9
Table 1: Datasets used to train and test our diarization models.

5.2 Experimental settings

We basically followed the training protocol of the best model described in [fujita2020endtoend]. (SA-EEND is available at https://github.com/hitachi-speech/EEND; we will release the source code of SA-EEND with EDA in the same repository.) We used SA-EEND with four stacked Transformer encoders as the baseline and as the backbone of our method. The inputs to the SA-EEND were 345-dimensional log-scaled Mel-filterbank based features, the same as those used in the original paper. For our method, we extracted a sequence of 256-dimensional embeddings after the last layer normalization [ba2016layer] of the SA-EEND and fed it into the EDA to calculate attractors. The threshold \tau in Equation 8, which determines whether an attractor exists, was set to 0.5. As explained in subsection 4.1, we used two types of input order for the EDA: chronological order and shuffled order. Unless otherwise noted, we used the same type of order in the training and inference phases.

In this paper, we evaluated our method under two conditions: a fixed number of speakers and a flexible number of speakers. For the fixed number of speakers, we first trained our model on Sim2spk (34.1 % overlap ratio) or Sim3spk (34.2 % overlap ratio) for 100 epochs. We used the Adam optimizer [kingma2015adam] with the learning rate schedule proposed in [vaswani2017attention] with 100,000 warm-up steps. We also finetuned these models on the CALLHOME subsets with the corresponding numbers of speakers to evaluate performance on real recordings. For comparison, we also evaluated i-vector and x-vector systems using agglomerative hierarchical clustering with probabilistic linear discriminant analysis (PLDA) scoring based on Kaldi's pretrained model [snyder2018xvectors]; in these cases, TDNN-based speech activity detection [peddinti2015jhu] and the oracle number of speakers were used. For the flexible-speaker condition, we finetuned the 2-speaker model trained on Sim2spk on the concatenation of Sim1spk, Sim2spk, Sim3spk, and Sim4spk for 25 epochs, and then finetuned it on CALLHOME or the DIHARD dev set to evaluate performance on the real datasets. The x-vector-based methods were evaluated both with the oracle number of speakers and with a clustering threshold determined on the training set.
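For reference, the learning-rate schedule of [vaswani2017attention] with 100,000 warm-up steps can be written as below (a sketch of the usual Noam formulation; the model dimension is an illustrative value):

```python
def noam_lr(step: int, d_model: int = 256, warmup_steps: int = 100_000) -> float:
    """Learning-rate schedule from 'Attention Is All You Need'."""
    step = max(step, 1)  # avoid division by zero at step 0
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```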

For the evaluation metric, we used the diarization error rate (DER). A collar tolerance of 0.25 s was applied at the start and end of each segment for the evaluations on the simulated datasets and the CALLHOME dataset. For the DIHARD dataset, we also used the Jaccard error rate (JER), and no collar tolerance was applied, following the regulations of the second DIHARD challenge [ryant2019second].
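For illustration, the DER can be computed at the frame level as follows (a simplified sketch that ignores the collar tolerance; the speaker mapping is chosen with the Hungarian algorithm, and the array names are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def frame_level_der(ref: np.ndarray, hyp: np.ndarray) -> float:
    """ref, hyp: (T, S) binary speaker-activity matrices on a common frame grid."""
    # Map hypothesis speakers to reference speakers so that the total overlap is maximized.
    overlap = ref.T @ hyp                                  # (S_ref, S_hyp) co-activity counts
    ref_idx, hyp_idx = linear_sum_assignment(-overlap)
    n_ref = ref.sum(axis=1)                                # active reference speakers per frame
    n_hyp = hyp.sum(axis=1)                                # active hypothesis speakers per frame
    n_correct = (ref[:, ref_idx] * hyp[:, hyp_idx]).sum(axis=1)
    miss = np.maximum(n_ref - n_hyp, 0).sum()
    false_alarm = np.maximum(n_hyp - n_ref, 0).sum()
    confusion = (np.minimum(n_ref, n_hyp) - n_correct).sum()
    return (miss + false_alarm + confusion) / n_ref.sum()
```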

5.3 Results on a fixed number of speakers

Method   Sim2spk(34.4%)   Sim2spk(27.3%)   Sim2spk(19.6%)   CH   CSJ
i-vector clustering 33.74 30.93 25.96 12.10 27.99
x-vector clustering 28.77 24.46 19.78 11.53 22.96
BLSTM-EEND [fujita2019end1] 12.28 14.36 19.69 26.03 39.33
SA-EEND [fujita2019end2] 4.56 4.50 3.85 9.54 20.48
SA-EEND + EDA (Chronol.) 3.07 2.74 3.04 8.24 18.89
SA-EEND + EDA (Shuffled) 2.69 2.44 2.60 8.07 16.27
Table 2: DERs (%) on 2-speaker datasets.
Method   Sim3spk(34.7%)   Sim3spk(27.4%)   Sim3spk(19.1%)   CH
x-vector clustering 31.78 26.06 19.55 19.01
SA-EEND 8.69 7.64 6.92 14.00
SA-EEND + EDA (Chronol.) 13.02 11.65 10.41 15.86
SA-EEND + EDA (Shuffled) 8.38 7.06 6.21 13.92
Table 3: DERs (%) on 3-speaker datasets.
Method | Whole sequence (Chronol. / Shuffled) | Subsampled | Use the last part
SA-EEND + EDA (Chronol.) | 3.07 / 30.04 | 3.54  7.32  14.48  21.13  27.18 | 3.67  4.97  5.40  6.11  7.68
SA-EEND + EDA (Shuffled) | 2.69 / 2.69 | 2.70  2.68  2.79  3.09  5.08 | 3.36  5.92  7.46  8.59  10.65
Table 4: DERs (%) on Sim2spk (34.4 % overlap) using various types of input sequences.

First, we evaluated our method under the 2-speaker condition used in [fujita2019end1, fujita2019end2]. The results are shown in Table 2. The best DERs were attained by the EDA trained on shuffled embeddings; when the model was trained on embeddings in chronological order, the DERs degraded slightly. We also show the results for the 3-speaker condition in Table 3. Our method with shuffled embeddings achieved better DERs than both the conventional x-vector clustering and the vanilla SA-EEND.

Effect of the input order: To better understand the EDA, we evaluated the diarization performance on both chronologically ordered and shuffled sequences. We also reduced the length of the sequences either by subsampling the embeddings or by using only the last part of each sequence. The results on Sim2spk (34.4 % overlap) are shown in Table 4. When the EDA was trained on chronologically ordered embeddings, it worked better on chronologically ordered embeddings but degraded on shuffled ones. When the embeddings were subsampled, the performance degradation was also severe even if the samples remained in chronological order, whereas using only the last part of the sequence suppressed the degradation. These results suggest that the model captured the tendency of the speech length to output attractors. In contrast, when the EDA was trained on shuffled embeddings, the model was affected very little by the order or by subsampling. These results show that the EDA could successfully capture the overall sequence.

Visualization:

Figure 2: Visualization of embeddings and attractors on 2-speaker mixtures in Sim2spk.

In Figure 2, we visualized embeddings and attractors of 2-speaker mixtures by applying PCA to reduce their dimensionality. The embeddings of two speakers were well separated from the silent region, and those of overlapping regions were distributed between two clusters. Attractors were successfully calculated for each of the two speakers.
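The kind of visualization shown in Figure 2 can be reproduced by projecting both the embeddings and the attractors onto the first two principal components of the embeddings (a sketch with random stand-in data):

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# (T, D) frame embeddings and (S, D) attractors; random stand-ins for illustration.
embeddings = np.random.randn(500, 256)
attractors = np.random.randn(2, 256)

pca = PCA(n_components=2).fit(embeddings)
emb_2d = pca.transform(embeddings)
att_2d = pca.transform(attractors)   # project the attractors with the same basis

plt.scatter(emb_2d[:, 0], emb_2d[:, 1], s=4, alpha=0.5, label="embeddings")
plt.scatter(att_2d[:, 0], att_2d[:, 1], marker="*", s=200, label="attractors")
plt.legend()
plt.show()
```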

5.4 Results on a flexible number of speakers

We also evaluated our method under a condition with a flexible number of speakers. In this case, the order of the embeddings was always shuffled. The model was first finetuned from the weights trained on Sim2spk and evaluated on simulated mixtures with a flexible number of speakers. The results are shown in Table 5. Our method achieved better DERs than the x-vector clustering-based method. It achieved 4.33 % and 8.94 % DERs on two- and three-speaker mixtures, only 1.64 and 0.56 points worse than the two- and three-speaker specific models, respectively. Furthermore, our method improved further when the actual number of speakers was given, while x-vector clustering performed worse in most cases when the oracle number of speakers was used.

Method   Sim1spk   Sim2spk   Sim3spk   Sim4spk
x-vector clustering
     Threshold 37.42 7.74 11.46 22.45
     Oracle #Spk 1.67 28.77 31.78 35.76
SA-EEND + EDA
     Estimated #Spk 0.39 4.33 8.94 13.76
     Oracle #Spk 0.16 4.26 8.63 13.31
Table 5: DERs (%) on simulated mixtures of a flexible number of speakers.

We also evaluated our method on real conversations using CALLHOME. In this case, the model was finetuned again on the CALLHOME training set and evaluated on the test set. The results are shown in Table 6. Our method achieved a 15.29 % DER, which outperformed the clustering-based method (19.43 %). However, it did not perform well when the number of speakers was higher than four; this is because CALLHOME contains only ten recordings that include more than four speakers.

#Spk
Method 2 3 4 5 6 All
x-vector clustering
     Threshold 15.45 18.01 22.68 31.40 34.27 19.43
     Oracle #Spk 8.93 19.01 24.48 32.14 34.95 18.98
SA-EEND + EDA
     Estimated #Spk 8.50 13.24 21.46 33.16 40.29 15.29
     Oracle #Spk 8.35 13.20 21.71 33.00 41.07 15.43
Table 6: DERs (%) on CALLHOME of a flexible number of speakers.
Method DER JER
DIHARD-2 baseline [sell2018diarization] 40.86 66.60
Best pre-is2019-deadline [novoselov2019speaker] 35.10 57.11
Best post-is2019-deadline [landini2020but] 27.11 49.07
SA-EEND + EDA (Estimated #Speakers) 32.59 55.99
Table 7: DERs and JERs (%) on DIHARD eval.

Finally, we evaluated our method on the DIHARD dataset. The evaluation followed DIHARD 2019 track 2, in which speech activity detection has to be conducted from single-channel audio. Because handling a large number of speakers with PIT is difficult, our system was trained to output only the seven most dominant speakers, even if the input contained more than seven speakers. The results are shown in Table 7. Our SA-EEND with EDA achieved a 32.59 % DER, which outperformed the baseline [sell2018diarization] and the best pre-is2019-deadline system by the DI-IT team [novoselov2019speaker], but it could not beat the best post-is2019-deadline system by the BUT team [landini2020but]. We note that our system is based on narrowband telephone-quality audio, while the others use higher-resolution audio with additional training data from the VoxCeleb datasets [nagrani2020voxceleb]. Evaluations on high-resolution audio with additional data are left for future work.

6 Conclusions

In this paper, we proposed EDA, which calculates attractors from a sequence of embeddings, and applied it to SA-EEND to achieve end-to-end speaker diarization of speech mixtures with a flexible number of speakers. Our method achieved state-of-the-art DERs under conditions with both fixed and flexible numbers of speakers.

References