Speaker verification (SV) is the process of verifying, based on a set of reference enrollment utterances, whether a verification utterance belongs to a known speaker. One subtask of SV is global password text-dependent speaker verification (TD-SV), which refers to the set of problems for which the transcripts of reference enrollment and verification utterances are constrained to a specific phrase. In this study, we focus on the “OK Google” and “Hey Google” global passwords, as they relate to the Voice Match feature of Google Home [1, 2].
I-vector [3] and PLDA [4] based systems have been the dominating paradigm of SV in previous years. More recently, with the rise of deep learning [5] in various machine learning applications, more effort has been focused on using neural networks for speaker verification. Currently, the most promising approaches are end-to-end integrated architectures that simulate the enrollment-verification two-stage process during training.
For example, in [6], the authors propose architectures that resemble the components of an i-vector + PLDA system. Such an architecture allows the network parameters to be bootstrapped from pretrained i-vector and PLDA models for better performance. However, this initialization stage also constrains the type of network architectures that can be used: only Deep Neural Networks (DNNs) can be initialized from classical i-vector and PLDA models. In [7], we have shown that Long Short-Term Memory (LSTM) networks [8] can achieve better performance than DNNs for integrated end-to-end architectures in TD-SV scenarios.
However, one challenge with the architecture introduced in [7] is that silence and background noise are not well captured. Although our speaker verification runs on a short 800ms window segmented by the keyword detector [9, 10], the phonemes are usually surrounded by frames of silence and background noise. Ideally, the speaker embedding should be built using only the frames corresponding to phonemes. Thus, we propose to use an attention layer [11, 12, 13] as a soft mechanism to emphasize the most relevant elements of the input sequence.
This paper is organized as follows. In Sec. 2, we first briefly review our LSTM-based d-vector baseline approach trained with the end-to-end architecture [7]. In Sec. 3, we introduce how we add the attention mechanism to our baseline architecture, covering different scoring functions, layer variants, and weights pooling methods. In Sec. 4, we set up experiments to compare attention-based models against our baseline model, and present the EER results on our testing set. Conclusions are made in Sec. 5.
2 Baseline Architecture
Our end-to-end training architecture [7] is described in Fig. 1. For each training step, a tuple of one evaluation utterance $x_{j\sim}$ and $N$ enrollment utterances $x_{kn}$ (for $n = 1, \dots, N$) is fed into our LSTM network: $\{x_{j\sim}, (x_{k1}, \dots, x_{kN})\}$, where $x$ represents the features (log-mel-filterbank energies) from a fixed-length segment, $j$ and $k$ represent the speakers of the utterances, and $j$ may or may not equal $k$. The tuple includes a single utterance from speaker $j$, and $N$ different utterances from speaker $k$. We call a tuple positive if $x_{j\sim}$ and the $N$ enrollment utterances are from the same speaker, i.e., $j = k$, and negative otherwise. We generate positive and negative tuples alternately.
For each utterance, let the output of the LSTM’s last layer at frame $t$ be a fixed-dimensional vector $h_t$, where $1 \le t \le T$. We take the last frame output as the d-vector $\omega = h_T$ (Fig. 2(a)), and build a new tuple: $\{\omega_{j\sim}, (\omega_{k1}, \dots, \omega_{kN})\}$. The centroid of tuple $(\omega_{k1}, \dots, \omega_{kN})$ represents the voiceprint built from $N$ utterances, and is defined as follows:

$$c_k = \mathbb{E}_n[\omega_{kn}] = \frac{1}{N}\sum_{n=1}^{N}\omega_{kn}. \qquad (1)$$
The similarity is defined using the cosine similarity function:

$$s = w \cdot \cos(\omega_{j\sim}, c_k) + b, \qquad (2)$$

with learnable $w$ and $b$. The tuple-based end-to-end loss is finally defined as:

$$L_T(\omega_{j\sim}, c_k) = \delta(j, k)\,\big(1 - \sigma(s)\big) + \big(1 - \delta(j, k)\big)\,\sigma(s). \qquad (3)$$
Here $\sigma(s) = 1/(1 + e^{-s})$ is the standard sigmoid function, and $\delta(j, k)$ equals $1$ if $j = k$, and $0$ otherwise. The end-to-end loss function encourages a larger value of $s$ when $k = j$, and a smaller value of $s$ when $k \ne j$. Considering the updates for both positive and negative tuples, this loss function is very similar to the triplet loss in FaceNet [14].
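As a concrete illustration, the loss for a single tuple can be sketched in NumPy as follows; the default values of $w$ and $b$ here are illustrative placeholders, not the trained parameters:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def tuple_e2e_loss(w_eval, w_enroll, same_speaker, w=10.0, b=-5.0):
    """Tuple-based end-to-end loss for one tuple of d-vectors.

    w_eval:       d-vector of the evaluation utterance (speaker j)
    w_enroll:     list of N enrollment d-vectors (speaker k)
    same_speaker: True for a positive tuple, i.e. j == k
    w, b:         the learnable scaling parameters of the similarity
    """
    # Centroid of the N enrollment d-vectors: the "voiceprint".
    c_k = np.mean(np.stack(w_enroll), axis=0)
    # Scaled cosine similarity s = w * cos(w_eval, c_k) + b.
    s = w * cosine(w_eval, c_k) + b
    sigma = 1.0 / (1.0 + np.exp(-s))
    # delta(j, k) = 1 for positive tuples: push sigma(s) toward 1;
    # for negative tuples, push sigma(s) toward 0.
    return 1.0 - sigma if same_speaker else sigma
```

A positive tuple with well-matched d-vectors yields a loss near 0, while a matched d-vector in a negative tuple yields a loss near 1, which is what drives the network to separate speakers.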
3 Attention-based model
3.1 Basic attention layer
In our baseline end-to-end training, we directly take the last frame output as the d-vector $\omega = h_T$. Alternatively, we could learn a scalar score $e_t$ for the LSTM output $h_t$ at each frame $t$:

$$e_t = f(h_t), \qquad t = 1, \dots, T. \qquad (4)$$
Then we can compute the normalized weights $\alpha_t$ using these scores:

$$\alpha_t = \frac{\exp(e_t)}{\sum_{s=1}^{T}\exp(e_s)}, \qquad (5)$$

such that $\sum_{t=1}^{T}\alpha_t = 1$. And finally, as shown in Fig. 2(b), we form the d-vector $\omega$ as the weighted average of the LSTM outputs at all frames:

$$\omega = \sum_{t=1}^{T}\alpha_t h_t. \qquad (6)$$
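The score-softmax-average pipeline above can be sketched in a framework-agnostic way with NumPy:

```python
import numpy as np

def attentive_dvector(h, scores):
    """Combine per-frame LSTM outputs h (shape [T, m]) into a d-vector.

    scores: per-frame scalar scores e_t (shape [T]) from any scoring
    function f. They are normalized with a softmax into weights
    alpha_t, and the d-vector is the weighted average of the h_t.
    """
    e = scores - np.max(scores)             # shift for numerical stability
    alpha = np.exp(e) / np.sum(np.exp(e))   # weights sum to 1
    return alpha, alpha @ h                 # omega = sum_t alpha_t * h_t
```

With uniform scores this reduces to mean pooling over frames; the baseline last-frame d-vector corresponds to putting all the weight on frame $T$.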
3.2 Scoring functions
By using different scoring functions in Eq. (4), we get different attention layers:
Bias-only attention: $e_t = f_{BO}(h_t) = b_t$, where $b_t$ is a scalar. Note this attention does not depend on the LSTM output $h_t$.
Linear attention: $e_t = f_{L}(h_t) = w_t^{\mathsf T} h_t + b_t$, where $w_t$ is an $m$-dimensional vector and $b_t$ is a scalar.
Shared-parameter linear attention: $e_t = f_{SL}(h_t) = w^{\mathsf T} h_t + b$, where the $m$-dimensional vector $w$ and scalar $b$ are the same for all frames.
Non-linear attention: $e_t = f_{NL}(h_t) = v_t^{\mathsf T}\tanh(W_t h_t + b_t)$, where $W_t$ is an $m' \times m$ matrix, and $v_t$ and $b_t$ are $m'$-dimensional vectors. The dimension $m'$ can be tuned on a development dataset.
Shared-parameter non-linear attention: $e_t = f_{SNL}(h_t) = v^{\mathsf T}\tanh(W h_t + b)$, where the same $W$, $v$ and $b$ are used for all frames.
In all the above scoring functions, all parameters are trained together with the rest of the end-to-end architecture [7].
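As a sketch, the two shared-parameter variants can be written as follows; the frame-dependent variants simply keep a separate $w_t$, $W_t$, $v_t$, $b_t$ per frame:

```python
import numpy as np

def f_sl(h, w, b):
    """Shared-parameter linear scoring: e_t = w^T h_t + b.

    h: LSTM outputs, shape [T, m]; w: shape [m]; b: scalar.
    Returns one score per frame, shape [T].
    """
    return h @ w + b

def f_snl(h, W, v, b):
    """Shared-parameter non-linear scoring: e_t = v^T tanh(W h_t + b).

    W: shape [m', m]; v and b: shape [m']. Returns shape [T].
    """
    return np.tanh(h @ W.T + b) @ v
```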
3.3 Attention layer variants
Apart from the basic attention layer described in Sec. 3.1, here we introduce two variants: cross-layer attention, and divided-layer attention.
For cross-layer attention (Fig. 3(a)), the scores $e_t$ and weights $\alpha_t$ are not computed using the outputs of the last LSTM layer $h_t$, but using the outputs of an intermediate LSTM layer $h'_t$, e.g. the second-to-last layer:

$$e_t = f(h'_t). \qquad (7)$$

However, the d-vector $\omega$ is still the weighted average of the last layer outputs $h_t$.
For divided-layer attention (Fig. 3(b)), we double the dimension of the last layer LSTM output $h_t$, and equally divide it into two parts: part-a $h_t^{a}$ and part-b $h_t^{b}$. We use part-a to build the d-vector, while using part-b to learn the scores:

$$e_t = f(h_t^{b}), \qquad \omega = \sum_{t=1}^{T}\alpha_t h_t^{a}. \qquad (8)$$
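A minimal sketch of divided-layer attention under these definitions; the scoring function is passed in, and any of the functions from Sec. 3.2 would do:

```python
import numpy as np

def divided_layer_dvector(h, score_fn):
    """Divided-layer attention.

    h: last-layer LSTM outputs with doubled dimension, shape [T, 2m].
    The outputs are split into part-a (used to build the d-vector)
    and part-b (used only to learn the attention scores).
    """
    dim = h.shape[1]
    assert dim % 2 == 0, "last-layer dimension must be even"
    h_a, h_b = h[:, : dim // 2], h[:, dim // 2 :]
    e = score_fn(h_b)              # scores from part-b
    alpha = np.exp(e - np.max(e))
    alpha /= np.sum(alpha)         # softmax over frames
    return alpha @ h_a             # weighted average of part-a
```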
3.4 Weights pooling
Another variation of the basic attention layer is that, instead of directly using the normalized weights to average the LSTM outputs, we can optionally perform maxpooling on the attention weights first. This additional pooling mechanism can potentially make our network more robust to temporal variations of the input signals. We have experimented with two maxpooling methods (Fig. 8):
Sliding window maxpooling: We run a sliding window on the weights, and for each window, only keep the largest value, and set other values to 0.
Global top-$K$ maxpooling: Only keep the largest $K$ values in the weights, and set all other values to 0.
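The two pooling methods can be sketched as follows. Renormalizing the kept weights so they still sum to 1 is our assumption here, since the text only states that the other values are set to 0:

```python
import numpy as np

def sliding_window_maxpool(alpha, window, step):
    """Keep only the largest weight inside each sliding window,
    zero out the rest, then renormalize."""
    pooled = np.zeros_like(alpha)
    for start in range(0, len(alpha), step):
        win = alpha[start : start + window]
        if win.size:
            pooled[start + int(np.argmax(win))] = np.max(win)
    return pooled / np.sum(pooled)

def top_k_maxpool(alpha, k):
    """Keep only the k largest weights, zero out the rest,
    then renormalize (assumes no ties at the threshold)."""
    threshold = np.sort(alpha)[-k]
    pooled = np.where(alpha >= threshold, alpha, 0.0)
    return pooled / np.sum(pooled)
```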
4 Experiments
4.1 Datasets and basic setup
To fairly compare different attention techniques, we use the same training and testing datasets for all our experiments.
Our training dataset is a collection of anonymized user voice queries, which is a mixture of “OK Google” and “Hey Google” utterances. It contains around 150M utterances from around 630K speakers. Our testing dataset is a manual collection consisting of 665 speakers. It is divided into two enrollment sets and two verification sets, one pair for each of “OK Google” and “Hey Google”. Each speaker has an average of 4.5 enrollment utterances and 10 evaluation utterances.
We report the speaker verification Equal Error Rate (EER) on the four combinations of enrollment set and verification set.
Our baseline model is a 3-layer LSTM, where each layer has dimension 128, with a projection layer [15] of dimension 64. On top of the LSTM is a linear layer of dimension 64. The acoustic parametrization consists of 40-dimensional log-mel-filterbank coefficients computed over a window of 25ms with 15ms of overlap. The same acoustic features are used for both keyword detection [9, 10] and speaker verification.
The keyword spotting system isolates segments of length $T = 80$ frames (800ms) that contain only the global password, and these segments form the tuples mentioned above. The two keywords are mixed together using the MultiReader technique introduced in [16].
4.2 Basic attention layer
First, we compare the baseline model with the basic attention layer (Sec. 3.1) using different scoring functions (Sec. 3.2). The results are shown in Table 1. As we can see, while bias-only and linear attention bring little improvement to the EER, non-linear attention improves the performance significantly, especially with shared parameters. (For the intermediate dimension of the non-linear scoring functions, we let $m' = m$, such that $W_t$ and $W$ are square matrices.)
Table 1: EER (%) comparison of the baseline model and basic attention layers with different scoring functions.

| Test data (Enroll → Verify) | Non-attention | $f_{BO}$ | $f_{L}$ | $f_{SL}$ | $f_{NL}$ | $f_{SNL}$ |
| --- | --- | --- | --- | --- | --- | --- |
| OK Google → OK Google | 0.88 | 0.85 | 0.81 | 0.80 | 0.79 | 0.78 |
| OK Google → Hey Google | 2.77 | 2.97 | 2.74 | 2.75 | 2.69 | 2.66 |
| Hey Google → OK Google | 2.19 | 2.30 | 2.28 | 2.23 | 2.14 | 2.08 |
| Hey Google → Hey Google | 1.05 | 1.04 | 1.03 | 1.03 | 1.00 | 1.01 |
4.3 Attention layer variants
To compare the basic attention layer with the two variants (Sec. 3.3), we use the scoring function that performed best in the previous experiment: the shared-parameter non-linear scoring function $f_{SNL}$. From the results in Table 2, we can see that divided-layer attention performs slightly better than basic attention and cross-layer attention (in our experiments, scores for cross-layer attention are learned from the second-to-last layer), at the cost that the dimension of the last LSTM layer is doubled.
Table 3: EER (%) comparison of different pooling methods on the attention weights: no pooling, sliding window maxpooling, and global top-$K$ maxpooling.
4.4 Weights pooling
To compare the different pooling methods on the attention weights introduced in Sec. 3.4, we use the divided-layer attention with the shared-parameter non-linear scoring function. For sliding window maxpooling, we experimented with different window sizes and steps, and found that a window size of 10 frames and a step of 5 frames perform best in our evaluations. For global top-$K$ maxpooling, we similarly tuned $K$ for the best performance. The results are shown in Table 3. We can see that sliding window maxpooling further improves the EER.
We also visualize the attention weights of a training batch for different pooling methods in Fig. 9. An interesting observation is that, when there’s no pooling, we can see a clear 4-strand or 3-strand pattern in the batch. This pattern corresponds to the “O-kay-Goo-gle” 4-phoneme or “Hey-Goo-gle” 3-phoneme structure of the keywords.
When we apply sliding window maxpooling or global top-$K$ maxpooling, the attention weights are much larger near the end of the utterance, which is easy to understand: the LSTM has accumulated more information at the end than at the beginning, and is thus more confident when producing the d-vector.
5 Conclusions
In this paper, we experimented with different attention mechanisms for our keyword-based text-dependent speaker verification system [7]. From our experimental results, the best practice is to: (1) use a shared-parameter non-linear scoring function; (2) use a divided-layer attention connection to the last layer output of the LSTM; and (3) apply sliding window maxpooling on the attention weights. After combining all these best practices, we improved the EER of our baseline LSTM model from 1.72% to 1.48%, which is a 14% relative improvement. The same attention mechanisms, especially the ones using shared-parameter scoring functions, could potentially be used to improve text-independent speaker verification models [16] and speaker diarization systems [17].
-  Yury Pinsky, “Tomato, tomahto. google home now supports multiple users,” https://www.blog.google/products/assistant/tomato-tomahto-google-home-now-supports-multiple-users, 2017.
-  Mihai Matei, “Voice match will allow google home to recognize your voice,” https://www.androidheadlines.com/2017/10/voice-match-will-allow-google-home-to-recognize-your-voice.html, 2017.
-  Najim Dehak, Patrick J Kenny, Réda Dehak, Pierre Dumouchel, and Pierre Ouellet, “Front-end factor analysis for speaker verification,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 788–798, 2011.
-  Daniel Garcia-Romero and Carol Y Espy-Wilson, “Analysis of i-vector length normalization in speaker recognition systems.,” in Interspeech, 2011, pp. 249–252.
-  Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
-  Johan Rohdin, Anna Silnova, Mireia Diez, Oldrich Plchot, Pavel Matejka, and Lukas Burget, “End-to-end dnn based speaker recognition inspired by i-vector and plda,” arXiv preprint arXiv:1710.02369, 2017.
-  Georg Heigold, Ignacio Moreno, Samy Bengio, and Noam Shazeer, “End-to-end text-dependent speaker verification,” in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 5115–5119.
-  Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural computation, vol. 9, no. 8, pp. 1735–1780, 1997.
-  Guoguo Chen, Carolina Parada, and Georg Heigold, “Small-footprint keyword spotting using deep neural networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 4087–4091.
-  Rohit Prabhavalkar, Raziel Alvarez, Carolina Parada, Preetum Nakkiran, and Tara N Sainath, “Automatic gain control and multi-style training for robust small-footprint keyword spotting with deep neural networks,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on. IEEE, 2015, pp. 4704–4708.
-  Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio, “Attention-based models for speech recognition,” in Advances in Neural Information Processing Systems, 2015, pp. 577–585.
-  Minh-Thang Luong, Hieu Pham, and Christopher D Manning, “Effective approaches to attention-based neural machine translation,” arXiv preprint arXiv:1508.04025, 2015.
-  Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio, “Show, attend and tell: Neural image caption generation with visual attention,” in International Conference on Machine Learning, 2015, pp. 2048–2057.
-  Florian Schroff, Dmitry Kalenichenko, and James Philbin, “Facenet: A unified embedding for face recognition and clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 815–823.
-  Haşim Sak, Andrew Senior, and Françoise Beaufays, “Long short-term memory recurrent neural network architectures for large scale acoustic modeling,” in Fifteenth Annual Conference of the International Speech Communication Association, 2014.
-  Li Wan, Quan Wang, Alan Papir, and Ignacio Lopez Moreno, “Generalized end-to-end loss for speaker verification,” arXiv preprint arXiv:1710.10467, 2017.
-  Quan Wang, Carlton Downey, Li Wan, Philip Mansfield, and Ignacio Lopez Moreno, “Speaker diarization with lstm,” arXiv preprint arXiv:1710.10468, 2017.