H-VECTORS: Utterance-level Speaker Embedding Using A Hierarchical Attention Model

10/17/2019
by   Yanpei Shi, et al.

In this paper, a hierarchical attention network that generates utterance-level embeddings (H-vectors) for speaker identification is proposed. Since different parts of an utterance may contribute differently to speaker identity, the hierarchical structure aims to learn speaker-related information both locally and globally. In the proposed approach, a frame-level encoder and attention are applied to segments of an input utterance to generate individual segment vectors. Segment-level attention is then applied to the segment vectors to construct an utterance representation. To evaluate the effectiveness of the proposed approach, the NIST SRE 2008 Part 1 dataset is used for training, and two datasets, Switchboard Cellular Part 1 and CallHome American English Speech, are used to evaluate the quality of the extracted utterance embeddings on speaker identification and verification tasks. In comparison with two baselines, X-vector and X-vector+Attention, the obtained results show that H-vectors achieve significantly better performance. Furthermore, the extracted utterance-level embeddings are more discriminative than those of the two baselines when mapped into a 2D space using t-SNE.



1 Introduction

The generation of compact representations that distinguish speakers has been an attractive topic and is widely used in related studies, such as speaker identification [17], verification [19, 14, 11], detection [13], segmentation [7, 23], and speaker-dependent speech enhancement [2, 6].

To extract a general representation, Dehak et al. [5] defined a “total variability space” containing the speaker and channel variabilities simultaneously, and then extracted speaker factors by decomposing the feature space into subspaces corresponding to sound factors, including speaker and channel effects. With the rapid development of deep learning technologies, several architectures using deep neural networks (DNNs) have been developed for general speaker representation [22, 20]. In [22], Variani et al. introduced the d-vector approach, in which the activations of the last hidden layer of a deep neural network are averaged over all frame-level features. Snyder et al. [20] used a five-layer DNN that takes a small temporal context into account, followed by statistics pooling. To further improve the performance of embedding generation, attention mechanisms have also been used in some recent studies [24, 26]. Wang et al. [24] used an attentive X-vector, in which a self-attention layer is added before the statistics pooling layer to weight each frame.

However, there is still room for improvement in how the importance of different parts of an input utterance is highlighted. To address this issue, a hierarchical attention mechanism is employed in this paper. This is inspired by Yang’s work [25] on document classification, which argues that not all parts of a document are equally relevant for answering a query, and therefore applies attention models to both word-level and sentence-level feature vectors via a hierarchical network. In the proposed approach, an utterance is viewed as a document, and its divided segments and acoustic frames are treated as sentences and words, respectively. An attention mechanism is then used hierarchically at both the frame level and the segment level. The utterance embedding is constructed by first building representations of segments from frames and then aggregating those into an utterance representation. The use of this hierarchical attention network (HAN) offers a way to obtain a discriminative utterance-level embedding by explicitly weighting target-relevant features.

The rest of the paper is organised as follows: Section 2 presents the architecture of our approach. Section 3 describes the data, experimental setup, and the baselines used for comparison. The obtained results are shown in Section 4, and a conclusion is drawn in Section 5.

2 Model Architecture

Figure 1: Architecture of Hierarchical Attention Network.

Figure 1 shows the architecture of the hierarchical attention network. The network consists of several parts: a frame-level encoder and attention layer, a segment-level encoder and attention layer, and two fully connected layers. Given input acoustic frame vectors, the proposed model generates an utterance-level representation, on top of which a classifier is trained to perform speaker identification. The details of each part are described in the following subsections.

2.1 Frame-Level Encoder and Attention

Assume that an utterance is divided into $N$ segments with a fixed-length window, and that each segment contains $M$ $d$-dimensional acoustic frame vectors $x_{it}$, $t \in [1, M]$, where $i \in [1, N]$ indexes the segment.

In the frame-level encoder, a one-dimensional CNN is used, followed by a bidirectional GRU [3] in order to capture contextual information from both directions of the acoustic frame sequence.
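
To make this concrete, the following is a minimal PyTorch sketch of such a frame-level encoder (a one-dimensional convolution followed by a bidirectional GRU). The layer widths follow Table 2, but the kernel size, activation and other details are assumptions for illustration rather than the authors' exact configuration.

import torch
import torch.nn as nn

class FrameLevelEncoder(nn.Module):
    """One-dimensional CNN over MFCC frames followed by a bidirectional GRU (sketch)."""

    def __init__(self, feat_dim=20, cnn_channels=512, gru_hidden=512):
        super().__init__()
        # Convolution along the time axis; kernel size 5 is an assumption.
        self.cnn = nn.Conv1d(feat_dim, cnn_channels, kernel_size=5, padding=2)
        self.gru = nn.GRU(cnn_channels, gru_hidden,
                          batch_first=True, bidirectional=True)

    def forward(self, x):
        # x: (batch, num_frames, feat_dim) MFCC vectors of one segment
        h = torch.relu(self.cnn(x.transpose(1, 2)))  # (batch, cnn_channels, num_frames)
        h, _ = self.gru(h.transpose(1, 2))           # (batch, num_frames, 2 * gru_hidden)
        return h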

The output of the frame-level encoder, $h_{it}$, contains the summarised information of the segment centred around $x_{it}$.

In the frame-level attention layer, a two-layer MLP is first used to convert $h_{it}$ into a hidden representation $u_{it}$, from which a normalised importance weight $\alpha_{it}$ is computed via a softmax function:

$u_{it} = \tanh(W_f h_{it} + b_f)$   (1)

$\alpha_{it} = \dfrac{\exp(u_{it}^{\top} u_f)}{\sum_{t} \exp(u_{it}^{\top} u_f)}$   (2)

where $W_f$, $b_f$ and $u_f$ are the parameters of the two-layer MLP. These parameters are shared when processing all segments. The weighted output of the frame-level encoder is computed by

$v_{it} = \alpha_{it} h_{it}$   (3)

Following [20], statistics pooling is applied to $v_{it}$ to compute its mean vector ($\mu_i$) and standard deviation vector ($\sigma_i$) over $t$. A segment vector $s_i$ is then obtained by concatenating the two vectors:

$s_i = [\mu_i, \sigma_i]$   (4)
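
A minimal PyTorch sketch of this frame-level attention and statistics pooling, following the reconstructed Eqs. (1)-(4) above, is given below. The attention hidden size (att_dim) is an assumption made for illustration, not a value taken from the paper.

import torch
import torch.nn as nn

class FrameLevelAttention(nn.Module):
    """Two-layer MLP attention followed by statistics pooling (sketch of Eqs. (1)-(4))."""

    def __init__(self, enc_dim=1024, att_dim=256):
        super().__init__()
        # att_dim=256 is an assumed hidden size for the attention MLP.
        self.proj = nn.Linear(enc_dim, att_dim)            # W_f and b_f in Eq. (1)
        self.context = nn.Linear(att_dim, 1, bias=False)   # u_f in Eq. (2)

    def forward(self, h):
        # h: (batch, num_frames, enc_dim) frame-level encoder outputs h_it
        u = torch.tanh(self.proj(h))                   # Eq. (1)
        alpha = torch.softmax(self.context(u), dim=1)  # Eq. (2), weights over frames
        v = alpha * h                                  # Eq. (3), weighted outputs
        mu = v.mean(dim=1)                             # mean over frames
        sigma = v.std(dim=1)                           # standard deviation over frames
        return torch.cat([mu, sigma], dim=-1)          # Eq. (4), segment vector s_i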

2.2 Segment Level Encoder and Attention

For the segment-level encoder and attention, the same steps as in the frame-level encoder and attention are followed, except that the bidirectional GRU layer is omitted, since its omission considerably accelerates training when processing a large number of samples.

Dataset Type #Speaker Size (hour) #Utterance (1s) #Utterance (3s)
SRE08 Telephone+Interview 1336 640 3,528,326 1,176,453
CHE Telephone 120 60 252,224 84,460
SWBC Telephone 254 130 1,008,901 336,417
Table 1: Details of the three telephone speech datasets: Part 1 of SRE 2008 (SRE08), CallHome (CHE), and Switchboard Cellular (SWBC).

The weighted output of the segment-level attention layer can then be computed analogously [15]:

$u_i = \tanh(W_s s_i + b_s), \quad \alpha_i = \dfrac{\exp(u_i^{\top} u_s)}{\sum_{i} \exp(u_i^{\top} u_s)}, \quad v_i = \alpha_i s_i$   (5)

where $W_s$, $b_s$ and $u_s$ are the parameters of a two-layer MLP used for the segment-level attention. A vector $u$ is generated using statistics pooling over all weighted segment vectors $v_i$:

$u = [\mu, \sigma]$   (6)

where $\mu$ and $\sigma$ are the mean and standard deviation of $v_i$ over $i$.

The final speaker identity classifier is constructed using a two-layer MLP with $u$ as its input. As shown in Figure 1, the final utterance embedding is obtained after the first fully connected layer.
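
Putting the pieces together, a rough sketch of how the frame-level and segment-level components might be assembled into the full model is shown below. It reuses the hypothetical FrameLevelEncoder and FrameLevelAttention classes from the sketches above; the layer sizes follow Table 2, but everything else (e.g. the kernel size of the segment-level CNN and the placement of non-linearities) is an assumption.

import torch
import torch.nn as nn

class HVectorModel(nn.Module):
    """Hierarchical attention sketch: frames -> segment vectors -> utterance embedding."""

    def __init__(self, num_speakers, seg_dim=2048, seg_channels=1500, emb_dim=512):
        super().__init__()
        self.frame_encoder = FrameLevelEncoder()
        self.frame_attention = FrameLevelAttention(enc_dim=1024)
        # Segment-level encoder: CNN only, since the GRU is omitted at this level.
        self.seg_cnn = nn.Conv1d(seg_dim, seg_channels, kernel_size=1)
        self.seg_attention = FrameLevelAttention(enc_dim=seg_channels)  # same attention form
        self.fc1 = nn.Linear(2 * seg_channels, emb_dim)  # utterance embedding taken here
        self.fc2 = nn.Linear(emb_dim, emb_dim)
        self.out = nn.Linear(emb_dim, num_speakers)      # speaker classification layer

    def forward(self, x):
        # x: (batch, num_segments, num_frames, feat_dim)
        b, n, t, d = x.shape
        h = self.frame_encoder(x.reshape(b * n, t, d))
        seg_vecs = self.frame_attention(h).reshape(b, n, -1)           # (b, n, 2048)
        s = torch.relu(self.seg_cnn(seg_vecs.transpose(1, 2))).transpose(1, 2)
        utt = self.seg_attention(s)                                    # (b, 3000)
        emb = self.fc1(utt)                                            # H-vector embedding
        logits = self.out(torch.relu(self.fc2(torch.relu(emb))))
        return logits, emb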

3 Experiment

3.1 Data

Three datasets, NIST SRE 2008 Part 1 (SRE08), CallHome American English Speech (CHE), and Switchboard Cellular Part 1 (SWBC), are used in this paper to train the proposed model and evaluate utterance embedding performance. SRE08 denotes the 2008 NIST speaker recognition evaluation test set [8], which contains multilingual telephone speech and English interview speech. In this work, Part 1 of SRE 2008, containing about 640 hours of speech from 1336 distinct speakers, is selected for our experiments.

SWBC [4] contains 130 hours of telephone speech from 254 speakers (129 male and 125 female) recorded under varied environmental conditions (indoors, outdoors and moving vehicles). The stereo speech signals are split into two mono channels, and both are used in the experiments. CHE [1] contains 120 telephone conversations between native English speakers. Among all of the calls, 90 were placed to various locations outside North America. In this dataset, only speech from the left channel is used, as speaker labels for the right channel are unavailable. In our experiments, SRE08 is used to train the proposed model, and utterance-level embeddings are then generated on CHE and SWBC.

3.2 Experiment Setup

In this work, after removing unvoiced signals using energy-based VAD [16], fixed-length sliding windows (one second or three seconds) with a half-window shift are employed to divide speech streams into short segments. Each segment is treated as an independent utterance. The total numbers of utterances for the three datasets are listed in Table 1. Each utterance is then split into 10 equal-length fragments without overlap. Each fragment is further segmented into frames using a 25 ms sliding window with a 10 ms hop. All frames are converted into 20-dimensional MFCC feature vectors. Similar to [25], to build a hierarchical structure, each utterance, fragment and frame vector is viewed as a document, sentence and word, respectively.
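
As an illustration of this preprocessing, the sketch below extracts 20-dimensional MFCCs with a 25 ms window and 10 ms hop and splits one fixed-length utterance into 10 equal fragments of frames. It uses librosa; the 8 kHz sampling rate is an assumption for telephone speech, and the energy-based VAD step is omitted.

import numpy as np
import librosa

def utterance_to_fragments(wav, sr=8000, num_fragments=10, n_mfcc=20):
    """Split one fixed-length utterance into equal fragments of MFCC frame vectors."""
    n_fft = int(0.025 * sr)     # 25 ms analysis window
    hop = int(0.010 * sr)       # 10 ms hop
    mfcc = librosa.feature.mfcc(y=wav, sr=sr, n_mfcc=n_mfcc,
                                n_fft=n_fft, hop_length=hop)   # (n_mfcc, num_frames)
    frames = mfcc.T                                            # (num_frames, n_mfcc)
    # Drop trailing frames so the frame sequence splits into equal, non-overlapping parts.
    usable = (len(frames) // num_fragments) * num_fragments
    return np.stack(np.split(frames[:usable], num_fragments))  # (10, frames_per_frag, 20)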

To evaluate the utterance-level embeddings, speaker identification and verification are conducted using the embeddings generated on CHE and SWBC. Instead of operating directly on the embeddings, a PLDA back-end [18] is applied to reduce their dimension to 300.

Both the SWBC and CHE datasets are randomly split into training and test data with a 9:1 ratio for speaker identification. For the speaker verification task, in SWBC there are 50 speakers in the enrolment set and 120 speakers in the evaluation set, with 10 utterances per speaker. In CHE, there are 30 speakers in the enrolment set and 60 speakers in the evaluation set, again with 10 utterances per speaker.

In order to compare the proposed approach with other speaker embedding systems, two baselines are built using methods developed in previous studies. The first baseline ("X-vectors") is based on a TDNN architecture [20]; it is now widely used for speaker recognition and is effective for speaker embedding extraction. The second baseline ("X-vectors+Attention") combines a global attention mechanism with a TDNN [24, 26]. For evaluation, prediction accuracy is reported for the speaker identification task, and the equal error rate (EER) is reported for the speaker verification task. Moreover, to show the quality of the learned utterance-level embeddings, t-SNE [12] is used to visualise their distributions after projection into a 2-dimensional space.
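
For reference, the equal error rate used in the verification experiments can be computed from trial scores as sketched below with scikit-learn; scores and labels here are hypothetical arrays of similarity scores and same/different-speaker labels, not the authors' evaluation code.

import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(scores, labels):
    """EER: the operating point where false acceptance and false rejection rates meet."""
    fpr, tpr, _ = roc_curve(labels, scores)   # labels: 1 = same speaker, 0 = different
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))     # threshold where the two error rates cross
    return (fpr[idx] + fnr[idx]) / 2.0

# Example with made-up trial scores:
# eer = equal_error_rate(np.array([0.9, 0.2, 0.7, 0.1]), np.array([1, 0, 1, 0]))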

Level Model Input Output
Frame-Level CNN (30,20,1) (30,1,512)
Frame-Level Bi-GRU (30,512) (30,1024)
Frame-Level Attention (30,1024) (30,1024)
Frame-Level Statistics Pooling (30,1024) (1,2048)
Segment-Level CNN (10,2048,1) (10,1,1500)
Segment-Level Attention (10,1500) (10,1500)
Segment-Level Statistics Pooling (10,1500) (1,3000)
Utterance-Level DNN (1,3000) (1,512)
Utterance-Level DNN (1,512) (1,512)
Table 2: Architecture of the proposed approach (input and output tensor shapes per layer).

Table 2 shows the configuration of the proposed architecture. The network also contains batch normalisation [9] and dropout [21] layers, with the dropout rate set to 0.2. The Adam optimiser [10] is used for all experiments.
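
A minimal training-step sketch consistent with this setup is shown below. HVectorModel refers to the hypothetical sketch defined earlier, and the Adam hyperparameters and learning rate shown are PyTorch defaults used purely for illustration, not the values reported by the authors.

import torch
import torch.nn as nn

# Hypothetical instantiation; HVectorModel is the sketch class defined in Section 2.
model = HVectorModel(num_speakers=1336)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8)  # illustrative defaults only
criterion = nn.CrossEntropyLoss()

def train_step(batch, labels):
    """One optimisation step on a batch of shape (batch, segments, frames, mfcc)."""
    model.train()
    optimizer.zero_grad()
    logits, _ = model(batch)           # the embedding output is unused during training
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()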

4 Results

Figure 2: Embedding visualization using t-SNE for (a) X-vector, (b) X-vector+Attention, and (c) H-vector. Each color represents a speaker, and each point indicates an utterance.

Table 3 shows the prediction accuracies on the test data of SRE08 using the proposed approach and the two baselines. Two different utterance lengths, 1 second and 3 seconds, are used in the experiments. The H-vectors achieve higher accuracy than both baselines for either input length. When the length of the input utterances is one second, the accuracy obtained using H-vectors reaches 94.5%, a 4.4% improvement over X-vectors and a 2.4% improvement over X-vectors+Attention. When the length of the input utterances is three seconds, the accuracy obtained using H-vectors reaches 98.5%, an improvement of about 3% over X-vectors and about 2% over X-vectors+Attention. The proposed approach is thus more robust than the two baselines when the processed utterances are short. In addition, the accuracies obtained using 3-second utterances are better than those using 1-second utterances, as longer utterances contain more information relevant to the target speaker than shorter ones.

Utterance Length Model Accuracy %
1 Second X-vector 90.1
X-vector+Attention 92.1
H-vector 94.5
3 Seconds X-vector 95.2
X-vector+Attention 96.7
H-vector 98.5
Table 3: Identification accuracy on the test data of SRE08 when the utterance length is 1s or 3s. The number of speakers is 1336.
Utterance Length Model Accuracy % EER %
1 Second X-vector 84.8 1.94
X-vector+Attention 87.5 1.61
H-vector 89.1 1.44
3 Seconds X-vector 89.4 1.46
X-vector+Attention 91.0 1.21
H-vector 92.8 1.08
Table 4: Identification accuracy and Equal Error Rate (EER) on the CHE dataset when the utterance length is 1s or 3s.
Utterance Length Model Accuracy % EER %
1 Second X-vector 78.2 2.23
X-vector+Attention 81.0 2.05
H-vector 83.7 1.92
3 Seconds X-vector 81.3 2.01
X-vector+Attention 84.0 1.82
H-vector 86.2 1.69
Table 5: Identification accuracy and Equal Error Rate (EER) on the SWBC dataset when the utterance length is 1s or 3s.

To evaluate the quality of the embeddings extracted using the proposed approach, two additional datasets are employed in our experiments. Table 4 and Table 5 show the identification accuracy and verification EER when using the embeddings extracted on the CHE and SWBC datasets, respectively. On both datasets, the H-vectors consistently outperform the two baselines for both one-second and three-second utterances.

Since the model is trained on SRE08, the identification performance on its test data is clearly better than that on the other two datasets. As the SWBC dataset covers a wide range of environmental conditions (indoors, outdoors and moving vehicles), both its identification and verification performance are relatively worse than those obtained on the CHE dataset.

To further test the quality of the extracted utterance-level embeddings, t-SNE [12] is used to visualise their distribution by projecting the high-dimensional vectors into a 2D space. From the SWBC dataset, 10 speakers are selected and 500 one-second segments are randomly sampled for each speaker. Figures 2 (a), (b), and (c) show the distributions of the selected samples obtained using X-vectors, X-vectors+Attention, and H-vectors, respectively. Each color represents a single distinct speaker and each point represents an utterance; the black mark represents the centre point of each speaker class. In Figure 2(a), showing the embeddings obtained by X-vectors, some samples from different speakers are clearly not well discriminated, as there are overlaps between speaker classes. Due to the use of an attention mechanism in X-vectors+Attention, Figure 2(b) shows a better sample distribution than Figure 2(a); however, some samples of the speaker labelled in blue are not well clustered. In Figure 2(c), the embeddings obtained by H-vectors show better separation than both baselines.
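
In outline, this visualisation can be reproduced with scikit-learn's t-SNE as in the sketch below; embeddings and speaker_ids are hypothetical arrays holding the extracted utterance embeddings and their speaker labels, not data from the paper.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(embeddings, speaker_ids):
    """Project utterance embeddings to 2D with t-SNE and colour each speaker differently."""
    # embeddings: (num_utterances, emb_dim), speaker_ids: (num_utterances,) labels
    points = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embeddings)
    for spk in np.unique(speaker_ids):
        mask = speaker_ids == spk
        plt.scatter(points[mask, 0], points[mask, 1], s=4, label=str(spk))
    plt.legend(markerscale=3, fontsize="small")
    plt.show()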

5 Conclusion And Future Work

In this paper, a hierarchical attention network was proposed for utterance-level embedding extraction. Inspired by the hierarchical structure of a document composed of words and sentences, each utterance is viewed as a document, and its segments and frame vectors are treated as sentences and words, respectively. The use of attention mechanisms at the frame and segment levels provides a way to search for target-relevant information both locally and globally, and thus yields better utterance-level embeddings, reflected in better performance on speaker identification and verification tasks using the extracted embeddings. Moreover, the obtained utterance-level embeddings are more discriminative than those produced by X-vectors and X-vectors+Attention.

In future work, different kinds of acoustic features, such as filter-bank and Mel-spectrogram features, will be investigated and tested on larger datasets, such as VoxCeleb 1 and 2.

References

  • [1] Canavan, A., Graff, D., and Zipperlen, G. CALLHOME American English Speech. https://catalog.ldc.upenn.edu/LDC97S42, 2001.
  • [2] Chuang, F.-K., Wang, S.-S., Hung, J.-w., Tsao, Y., and Fang, S.-H. Speaker-aware deep denoising autoencoder with embedded speaker identity for speech enhancement. Proc. Interspeech 2019 (2019), 3173–3177.
  • [3] Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, December 2014 (2014).
  • [4] Graff, D., Walker, K., and Miller, D. Switchboard Cellular Part 1 Audio. https://catalog.ldc.upenn.edu/LDC2001S13, 2001.
  • [5] Dehak, N., Kenny, P. J., Dehak, R., Dumouchel, P., and Ouellet, P. Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech, and Language Processing (2010).
  • [6] Gao, T., Du, J., Xu, L., Liu, C., Dai, L.-R., and Lee, C.-H. A unified speaker-dependent speech separation and enhancement system based on deep neural networks. In 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP) (2015), IEEE, pp. 687–691.
  • [7] Garcia-Romero, D., Snyder, D., Sell, G., Povey, D., and McCree, A. Speaker diarization using deep neural network embeddings. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2017), IEEE, pp. 4930–4934.
  • [8] NIST Multimodal Information Group. 2008 NIST Speaker Recognition Evaluation Training Set Part 1. https://catalog.ldc.upenn.edu/LDC2011S05, 2011.
  • [9] Ioffe, S., and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (2015), pp. 448–456.
  • [10] Kingma, D. P., and Ba, J. L. Adam: A method for stochastic optimization. In International Conference on Learning Representations (2015).
  • [11] Le, N., and Odobez, J.-M. Robust and discriminative speaker embedding via intra-class distance variance regularization. In Interspeech (2018), pp. 2257–2261.
  • [12] Maaten, L. v. d., and Hinton, G. Visualizing data using t-sne. Journal of machine learning research 9, Nov (2008), 2579–2605.
  • [13] McLaren, M., Castan, D., Nandwana, M. K., Ferrer, L., and Yilmaz, E. How to train your speaker embeddings extractor.
  • [14] Novoselov, S., Shulipa, A., Kremnev, I., Kozlov, A., and Shchemelinin, V. On deep speaker embeddings for text-independent speaker recognition. In Proc. Odyssey 2018 The Speaker and Language Recognition Workshop (2018), pp. 378–385.
  • [15] Pan, Y., Mirheidari, B., Reuber, M., Venneri, A., Blackburn, D., and Christensen, H. Automatic hierarchical attention neural network for detecting ad. Proc. Interspeech 2019 (2019), 4105–4109.
  • [16] Pang, J. Spectrum energy based voice activity detection. In 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC) (2017), IEEE, pp. 1–5.
  • [17] Park, H., Cho, S., Park, K., Kim, N., and Park, J. Training utterance-level embedding networks for speaker identification and verification. In Interspeech (2018), pp. 3563–3567.
  • [18] Salmun, I., Opher, I., and Lapidot, I. On the use of plda i-vector scoring for clustering short segments. In Odyssey (2016), pp. 407–414.
  • [19] Snyder, D., Garcia-Romero, D., Povey, D., and Khudanpur, S. Deep neural network embeddings for text-independent speaker verification. In Interspeech (2017), pp. 999–1003.
  • [20] Snyder, D., Garcia-Romero, D., Sell, G., Povey, D., and Khudanpur, S. X-vectors: Robust dnn embeddings for speaker recognition. In ICASSP (2018), IEEE.
  • [21] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research (2014).
  • [22] Variani, E., Lei, X., McDermott, E., Moreno, I. L., and Gonzalez-Dominguez, J. Deep neural networks for small footprint text-dependent speaker verification. In ICASSP (2014), IEEE.
  • [23] Wang, Q., Downey, C., Wan, L., Mansfield, P. A., and Moreno, I. L. Speaker diarization with lstm. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2018), IEEE, pp. 5239–5243.
  • [24] Wang, Q., Okabe, K., Lee, K. A., Yamamoto, H., and Koshinaka, T. Attention mechanism in speaker recognition: What does it learn in deep speaker embedding? In 2018 IEEE Spoken Language Technology Workshop (SLT) (2018), IEEE.
  • [25] Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., and Hovy, E. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies (2016).
  • [26] Zhu, Y., Ko, T., Snyder, D., Mak, B., and Povey, D. Self-attentive speaker embeddings for text-independent speaker verification. In Interspeech (2018).