Speaker verification (SV) is the task of verifying a person's claimed identity based on his or her voice. An important component of a practical SV system is voice activity detection (VAD), which detects the speech regions of an utterance that are the most effective for speaker discrimination. For example, if too many non-speech segments are misclassified as speech and used in training, they can corrupt the background models and hence significantly reduce the performance of SV systems. On the other hand, during testing, if not enough speech segments are detected, the SV algorithms will not be able to verify the speaker. For this reason, VAD has played a vital role in robust SV systems, from traditional Gaussian Mixture Model-Universal Background Model (GMM-UBM) and i-vector systems [1, 2] to recent deep speaker embedding systems [3, 4, 5, 6].
However, SV and VAD techniques have been developed largely independently of each other, and research on the use of VAD in the SV context is surprisingly limited. Most modern SV systems still use a traditional energy-based VAD, possibly due to its simplicity [3, 4, 5, 6]. In high signal-to-noise ratio (SNR) conditions, the energy-based VAD works reasonably well, but in low-SNR environments it produces unreliable speech frames [7, 8]. To deal with this problem, several deep neural network (DNN)-based VADs [9, 10, 11, 12] have been proposed and shown to give better results at low SNRs. DNN-based VADs are therefore preferable to energy-based VADs for SV systems in real-world environments, where background noise is always present and the SNR may not be high enough for the energy-based VAD.
In this work, we propose an algorithm, self-adaptive soft VAD, to integrate a DNN-based VAD into a deep speaker embedding based SV system. The proposed algorithm is a combination of two algorithms. The first is soft VAD, which was proposed by McLaren et al. Soft VAD generates frame-wise speech posteriors and integrates these probabilities directly into an SV system instead of making a hard decision based on a threshold, as is done in conventional VAD. The frame-level features extracted from a speaker feature extractor are weighted by their corresponding speech posteriors estimated from the VAD model. Soft VAD can be combined with self-adaptive VAD and used to backpropagate the gradient of the speaker embedding network's loss through the VAD model. In this way, the VAD can be adapted to the SV domain. Another advantage of soft VAD is that it removes the need to determine an optimal threshold value for making a binary speech/non-speech decision.
The second algorithm for integrating VAD into the speaker verification system is self-adaptive VAD. In a general setting, VAD and SV models are trained using different datasets. Therefore, when we apply VAD for SV, the domain mismatch between training and test data can lead to a significant degradation in the performance of VAD. To reduce the domain mismatch, we propose two fine-tuning based unsupervised domain adaptation (DA) methods: speech posterior-based DA (SP-DA) and joint learning-based DA (JL-DA).
For the SP-DA method, we fine-tune a pre-trained VAD on the SV data. Fine-tuning based domain adaptation has been applied to other tasks [13, 14, 15]. The problem is that conventional fine-tuning is supervised and requires VAD labels for the SV data, which are costly to obtain. Therefore, we need an unsupervised method that does not require any labeling information from the target domain. This is achieved by thresholding the speech posteriors estimated by the VAD to generate "reliable" labels for each utterance. The VAD is then fine-tuned using the labels generated by the VAD itself, and the process is repeated.
For the JL-DA method, we first integrate the VAD into the SV system through a soft VAD algorithm. As we already mentioned above, the gradient of the loss of the speaker embedding network is backpropagated through the VAD. Since the VAD process is partly guided by the loss of the speaker embedding network, the VAD would hopefully be able to produce higher posterior probabilities for frames which are more important for the SV task. The self-adaptive VAD is then conducted by combining two domain adaptation approaches.
In this paper, we first review related prior works in Section 2. Section 3 then presents our proposed method, self-adaptive soft VAD. The experimental setups and results are described in Section 4 and Section 5, respectively. We conclude this work in Section 6.
2 Prior Works
There have been a few studies that have investigated the combination of VAD and SV. These studies can be divided into two main categories: soft VAD and self-adaptive VAD. In this section, we review these two approaches in turn.
2.1 Soft VAD
Soft VAD was first proposed by McLaren et al. The purpose of their work was to improve the robustness of speaker recognition under mismatched train/test conditions by reducing the dependence of VAD on a tuned threshold. To achieve this, they integrated speech posteriors directly into a speaker recognition system instead of making binary decisions (so-called hard VAD). Specifically, a GMM-based VAD generates frame-wise speech posteriors, which are used to suppress the impact of non-speech-like frames on the speaker factors. The latter is achieved by weighting each frame with its speech posterior during the calculation of the Baum-Welch statistics in the i-vector framework. Soft VAD improves the generalization of speech/non-speech models to unseen conditions by removing the need to make threshold-based binary speech/non-speech decisions. They demonstrated the benefits of soft VAD over hard VAD in severely mismatched conditions.
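As a concrete illustration, the posterior-weighted Baum-Welch statistics described above can be sketched as follows. This is a minimal NumPy sketch with names of our own choosing; in a real i-vector system the component responsibilities gamma_t(c) would come from a trained UBM rather than being supplied directly.

```python
import numpy as np

def soft_bw_stats(feats, resp, speech_post):
    """Zeroth- and first-order Baum-Welch statistics with each frame
    weighted by its speech posterior, as in soft VAD.
    feats: (T, D) acoustic features; resp: (T, C) UBM responsibilities
    gamma_t(c); speech_post: (T,) speech posteriors p_t."""
    w = speech_post[:, None]        # (T, 1) per-frame weights
    N = (resp * w).sum(axis=0)      # (C,)  zeroth-order statistics
    F = (resp * w).T @ feats        # (C, D) first-order statistics
    return N, F
```

With all posteriors equal to one, this reduces to the standard (hard-VAD-free) statistics; frames with low speech posterior simply contribute less.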
In [2, 3], DNN-based VADs were employed for soft VAD. Yamamoto et al. applied the same soft VAD method to i-vector extraction, using an LSTM-based VAD instead of a GMM-based VAD to produce frame-wise speech posteriors. Wang et al. applied the same LSTM-based VAD to a deep speaker embedding network, combining the attention weights in attentive statistics pooling with the speech posteriors estimated by the LSTM-based VAD. The product of the attention weight and the speech posterior is multiplied with the corresponding frame-level speaker feature. In this paper, we employ the same soft VAD method to integrate the DNN-based VAD into the deep speaker embedding network and refer to this approach as "attention-based soft VAD" to distinguish it from other soft VAD methods [1, 2]. More details will be discussed in Section 3.1.
2.2 Self-adaptive VAD
Kinnunen et al. proposed a vector quantization (VQ)-based self-adaptive VAD (VQ-VAD) for i-vector based SV systems. VQ-VAD outperforms energy-based VAD, especially in noisy conditions. The main algorithm steps are as follows. First, mel-frequency cepstral coefficient (MFCC) features are extracted from the original noisy speech signal. Next, spectral subtraction is applied to the noisy speech for speech enhancement; the purpose of this step is merely to increase the energy contrast between speech and non-speech. The frames are then sorted by their log-energy values in ascending order. A fixed percentage of the lowest and highest energy frames (for instance, 10% of all frames in each case) is assumed to correspond to reliably-labeled non-speech and speech frames, respectively. Using k-means clustering (k = 16), speech and non-speech models are trained on the MFCCs of the highest and lowest energy frames, respectively. Finally, all the frames are labeled using the trained models, with an additional minimum-energy constraint.
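The VQ-VAD recipe above can be sketched as follows. This is a simplified NumPy illustration with a minimal k-means routine; the spectral subtraction step and the minimum-energy constraint are omitted, and all function names are our own.

```python
import numpy as np

def kmeans(X, k=16, iters=20, seed=0):
    """Minimal k-means returning centroids (a stand-in for the codebook
    training in VQ-VAD; the paper's exact settings may differ)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=min(k, len(X)), replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        a = d.argmin(1)
        for j in range(len(C)):
            if np.any(a == j):
                C[j] = X[a == j].mean(0)
    return C

def vq_vad(feats, log_energy, pct=0.10):
    """Label frames as speech (1) / non-speech (0) following VQ-VAD:
    the lowest/highest `pct` of frames by log-energy seed non-speech and
    speech codebooks, and each frame is assigned to the nearer codebook."""
    order = np.argsort(log_energy)             # ascending energy
    n = max(1, int(pct * len(feats)))
    nonspeech_cb = kmeans(feats[order[:n]])    # lowest-energy frames
    speech_cb = kmeans(feats[order[-n:]])      # highest-energy frames
    d_ns = ((feats[:, None, :] - nonspeech_cb[None]) ** 2).sum(-1).min(1)
    d_sp = ((feats[:, None, :] - speech_cb[None]) ** 2).sum(-1).min(1)
    return (d_sp < d_ns).astype(int)
```

The key idea, visible in the sketch, is that the VAD trains its own speech/non-speech models from the utterance at hand, which is what makes it self-adaptive.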
Asbai et al. improved VQ-VAD by using maximum a posteriori (MAP) adaptation. They first create two universal background models (UBMs) for speech and non-speech, trained on long utterances. The UBMs are then adapted to a speaker's short utterance via MAP adaptation.
3 Proposed Approach
In this section, we introduce the proposed approach, self-adaptive soft VAD, which combines attention-based soft VAD and DNN-based self-adaptive VAD. We explain each of them in the following subsections.
3.1 Attention-based soft VAD with the SV system
Before explaining the attention-based soft VAD, we first describe the deep speaker embedding system in detail. Fig. 1 illustrates the deep speaker embedding system used in this paper, which consists of three modules. The first module is the frame-level feature extractor, which takes a sequence of acoustic features and outputs the corresponding frame-level speaker features. In our system, ResNet is used as the feature extractor, as in previous studies [20, 21, 22]. The architecture is described in Table 1. Here, 40-dimensional log Mel-filterbank (Fbank) features are used as acoustic features, and an 11-frame context window is appended to form the time-frequency feature map for each frame. The ResNet takes these Fbank feature maps and outputs 128-dimensional frame-level features.
The second module of the deep speaker embedding is a pooling layer that converts variable-length frame-level features into a fixed-dimensional vector. We apply self-attentive pooling, which provides an importance-weighted mean of the frame-level features, where the importance is calculated by an attention mechanism. The attention model calculates a scalar score $e_t$ for the frame-level feature $h_t$:

$$e_t = v^{T} \tanh(W h_t + b), \quad (1)$$

where $W$, $b$, and $v$ are learnable parameters. The normalized score

$$\alpha_t = \frac{\exp(e_t)}{\sum_{\tau=1}^{T} \exp(e_\tau)} \quad (2)$$

is then used as the weight in the pooling layer to calculate the weighted mean vector:

$$\mu = \sum_{t=1}^{T} \alpha_t h_t. \quad (3)$$
The soft VAD is integrated into self-attentive pooling as follows. For the acoustic feature $x_t$, the DNN-based VAD produces a frame-wise speech posterior

$$p_t = P(\mathrm{speech} \mid x_{t-w}, \ldots, x_{t+w}), \quad (4)$$

where $t$ is the frame index and $w$ is the context window size. For simplicity, we use the same acoustic features as in the SV system (i.e., 40-dimensional Fbank features with $w = 5$). We combine the attention weights with the speech posteriors estimated by the DNN-based VAD by replacing the attention weight $\alpha_t$ in Eq. (3) with the product $\alpha_t p_t$. The weighted mean vector is therefore calculated as follows:

$$\mu = \sum_{t=1}^{T} \alpha_t p_t h_t. \quad (5)$$
We call this approach attention-based soft VAD; it is shown in Fig. 2 (c). Wang et al. showed that applying the speech posterior as a weight in attentive pooling improves the performance of a deep speaker embedding system.
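A minimal NumPy sketch of this pooling operation follows, assuming a tanh attention network and renormalized combined weights; the exact parameterization and whether the weights are renormalized may differ in the original systems.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_soft_vad_pool(h, speech_post, W, b, v):
    """Attention-based soft VAD pooling (a sketch): attention scores are
    computed from frame-level features h (T x D), multiplied by the
    speech posteriors from the VAD, and used to average the frames.
    W, b, v are the attention parameters; renormalizing the combined
    weights so they sum to one is our assumption."""
    e = np.tanh(h @ W + b) @ v      # (T,) scalar attention scores
    alpha = softmax(e)              # attention weights
    w = alpha * speech_post         # combine with speech posteriors
    w = w / w.sum()                 # renormalize (assumption)
    return w @ h                    # weighted mean vector
```

Note that a frame with speech posterior zero contributes nothing to the pooled vector, regardless of its attention score, which is the intended suppression of non-speech frames.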
The third module consists of two fully-connected layers. The first fully-connected layer produces a 128-dimensional speaker embedding. The last layer is a softmax layer, and each of its output nodes corresponds to one speaker ID. The model is trained by minimizing the cross-entropy loss over the speakers in the training set. For enrollment, the speaker embedding of each enrollment speaker is stored after length normalization. Finally, scoring between the enrollment and test speaker embeddings is performed using the cosine distance.
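The enrollment and scoring steps can be sketched as follows (a minimal NumPy illustration; the function names are our own):

```python
import numpy as np

def length_normalize(e):
    """Project a speaker embedding onto the unit sphere."""
    return e / np.linalg.norm(e)

def cosine_score(enroll, test):
    """Cosine scoring between length-normalized embeddings; after
    normalization the cosine similarity is a simple dot product."""
    return float(length_normalize(enroll) @ length_normalize(test))
```

A trial is accepted when the score exceeds a decision threshold; the equal error rate reported later is the operating point where false accepts and false rejects are equally likely.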
3.2 DNN-based self-adaptive VAD
In general, VAD and SV models are trained using different datasets. Therefore, when the VAD is used for SV, the performance of the VAD can be significantly degraded due to the domain mismatch between the source domain (VAD) and target domain (SV) data. To reduce the domain mismatch in the VAD, we propose two fine-tuning based unsupervised domain adaptation (DA) methods: speech posterior-based DA (SP-DA) and joint learning-based DA (JL-DA).
3.2.1 Speech posterior-based domain adaptation
The pseudo-code of the proposed method is given in Algorithm 1. Suppose we have a dataset $D$ of the SV domain, where $N$ is the total number of utterances in $D$. Let $X_i$ denote the set of acoustic features of the $i$-th utterance:

$$X_i = \{x_{i,1}, \ldots, x_{i,T_i}\},$$

where $x_{i,t}$ is the $t$-th frame's feature vector and $T_i$ is the number of frames in the $i$-th utterance. As mentioned above, we use the same 40-dimensional Fbank features for both tasks for simplicity, so the same features serve as input to both the VAD and the SV system. Here, we only have labels for SV and no labels for VAD because, in most cases, it is difficult to obtain VAD labels for SV data. $y_i$ denotes the speaker ID of the $i$-th utterance.
Following this, we obtain a set of speech posteriors from the VAD for all the frames in the $i$-th utterance. Each speech posterior is compared with a predefined threshold $\theta$ of 0.7. If the speech posterior of a frame is larger than the threshold, the frame is assumed to be reliably-labeled speech. Conversely, if the non-speech posterior of a frame is larger than the threshold, the frame is regarded as reliably-labeled non-speech. Through this operation we obtain a set of features and corresponding labels for VAD, and the VAD model is fine-tuned on the resulting labeled data by minimizing the cross-entropy loss. We call this method "speech posterior-based domain adaptation (SP-DA)".
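The pseudo-label generation step of SP-DA can be sketched as follows (a NumPy illustration; the function name is our own, and frames confident in neither direction are simply left out of the fine-tuning set):

```python
import numpy as np

def reliable_labels(speech_post, threshold=0.7):
    """Generate 'reliable' frame labels from VAD speech posteriors, as
    in SP-DA: frames with speech posterior > threshold become speech (1),
    frames with non-speech posterior (1 - p) > threshold become
    non-speech (0), and all remaining frames are discarded."""
    speech_post = np.asarray(speech_post)
    speech = speech_post > threshold
    nonspeech = (1.0 - speech_post) > threshold
    keep = speech | nonspeech
    idx = np.flatnonzero(keep)          # frame indices kept for fine-tuning
    labels = speech[keep].astype(int)   # 1 = speech, 0 = non-speech
    return idx, labels
```

The VAD is then fine-tuned on the kept frames with a standard cross-entropy loss, and the generate-then-fine-tune cycle can be repeated.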
3.2.2 Joint learning-based domain adaptation
First, we integrate the VAD into the SV system using the attention-based soft VAD algorithm discussed in Section 3.1. For the $i$-th utterance, the loss of the speaker embedding model is computed using a fixed-length segment of 300 frames and the corresponding speaker label. The gradients of this loss are backpropagated through both the VAD and the speaker embedding model. Since the VAD is partly guided by the loss of the speaker embedding network, it will hopefully produce higher posterior probabilities for frames that are more important for the subsequent SV task. We call this method "joint learning-based domain adaptation (JL-DA)".
The self-adaptive VAD is then conducted by combining the two losses as follows:

$$L_{\mathrm{VAD}} = L_{\mathrm{JL}} + \lambda L_{\mathrm{SP}}, \quad (6)$$

where $L_{\mathrm{VAD}}$ is the total loss of the VAD, $L_{\mathrm{JL}}$ is the speaker embedding loss backpropagated through the VAD (JL-DA), $L_{\mathrm{SP}}$ is the cross-entropy loss of SP-DA, and $\lambda$ is the loss weight for $L_{\mathrm{SP}}$. We denote the combination of the attention-based soft VAD and the DNN-based self-adaptive VAD as "self-adaptive soft VAD".
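A minimal sketch of the combined loss, assuming the SP-DA term is a frame-level binary cross-entropy on the pseudo-labeled frames (the names and the exact form of the JL-DA term are our own; in training, the JL-DA term is the speaker embedding loss whose gradient flows through the VAD):

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy over frames (used here for the SP-DA loss)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def total_vad_loss(loss_jl, speech_post, pseudo_labels, lam=1.5):
    """Total VAD loss combining JL-DA and SP-DA: the speaker embedding
    loss (backpropagated through the VAD) plus lam times the
    cross-entropy on pseudo-labeled frames."""
    return loss_jl + lam * bce(speech_post, pseudo_labels)
```

In a PyTorch implementation, both terms would be computed on the same forward pass so that a single backward call updates the VAD with respect to both objectives.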
4 Experimental Setups
4.1 Experimental setups for speaker verification
We perform experiments on the Korean speech and noise databases of Suh et al., which were collected by a playback-and-recording method using multi-channel microphone arrays to record distant speech. Here, we only use the first of the 32 channels. The speech data was recorded 3 m from an artificial mouth in an indoor room furnished to simulate a living room acoustically, with a reverberation time (RT60) of 0.23 s. The noise database consists of 12 types of indoor noise collected using the same approach. These speech and noise databases are used to create simulated noisy speech data reflecting various indoor acoustic conditions corrupted by room reverberation and additive noise.
The training set consists of read speech from 290 speakers and conversational speech from 260 speakers (550 speakers in total, both male and female). For each utterance, the noise is randomly selected from 3 noise types (air conditioner, TV, and smartphone ringtone) and added to the distant speech at SNRs randomly selected between 0 and 10 dB, resulting in 200 utterances per speaker.
The utterances of the remaining 105 speakers are used for evaluation. To simulate more realistic environments where the need for robust VAD is higher, we insert 2 seconds of silence at the beginning and end of each utterance before adding noise. The duration distribution of the test data before and after inserting silence is depicted in Fig. 3. For each utterance, the noise is randomly selected from 3 noise types (refrigerator, background conversation, and music) and added to the distant speech at SNRs randomly selected from 0, 5, and 10 dB, resulting in 24 utterances per speaker. For each speaker, 12 utterances are sampled as enrollment data. From the remaining utterances, we sample 12 utterances each from the same and from different speakers. In total, we create 30K trials for testing (in the text-independent scenario).
The input acoustic features are 40-dimensional Fbank features with a frame length of 25 ms and a frame shift of 10 ms, mean-normalized per utterance. All models are implemented in PyTorch and optimized by stochastic gradient descent with a momentum of 0.9. The mini-batch size is 64, and the weight decay parameter is 0.0001. We use the same learning rate schedule as in previous work, with an initial learning rate of 0.1.
4.2 Experimental setups for VAD
For VAD, we use the same data setup as in previous work. To construct the 35-hour training set, the clean training set of the Aurora4 database is used. To address the class imbalance, 2 seconds of silence is inserted at the beginning and end of each utterance. The clean speech is corrupted by 100 types of noise at SNRs randomly selected from -5, 0, 5, 10, 15, and 20 dB.
Although we assume that we do not have VAD labels for the SV data, we generate VAD labels in order to compare VAD performance before and after domain adaptation. We can generate such labels because, unlike most other speaker verification databases, our database includes the clean speech corresponding to each noisy utterance. We apply the Sohn VAD to the clean speech corpus and use the results as labels for the corresponding noisy corpus; this method was shown to be a reasonable way to generate VAD labels in previous work. The VAD performance is evaluated on the test data used in the speaker verification experiments.
The VAD model has 2 fully-connected hidden layers of 512 units with ReLU activations. We use the Adam optimizer with a mini-batch size of 512. The self-adaptive soft VAD is fine-tuned with the same optimizer as in the SV experiments.
5 Results
5.1 Comparison of different VAD methods
In this section, we compare SV results when different VADs are applied. The equal error rates (EERs) are shown in Table 2. Without VAD, the SV systems using temporal average pooling (TAP) and self-attentive pooling (SAP) show EERs of 13.33% and 12.31%, respectively. We observe that energy-based hard VAD (making a hard decision based on a threshold) does not improve SV performance under low-SNR conditions with long silence intervals. With DNN-based hard VAD, the SV systems with TAP and SAP achieve EERs of 11.39% and 10.83%, respectively, better than with no VAD or with energy-based VAD.
When domain adaptation (DA) is not used, we compare hard VAD, G-soft (gating-based soft) VAD, and A-soft (attention-based soft) VAD, which are depicted in Fig. 2. For G-soft VAD, no attention mechanism is used and only the speech posterior is multiplied with the frame-level feature in Eq. (3). Here, soft VAD performs better than hard VAD, and A-soft VAD yields a better result than G-soft VAD. From this, we conclude that using the speech posterior and the attention weight together is better than using either one alone. Note that applying hard VAD on top of A-soft VAD does not improve performance, instead yielding a higher EER of 10.63%.
The SV performance clearly improves when the self-adaptive VAD and A-soft VAD are used together. With this combination (i.e., self-adaptive soft VAD), we obtain an EER of 9.21%. As a reference, we also provide an upper bound on the performance: an EER of 8.27% obtained using ground-truth VAD labels. When only one of JL-DA and SP-DA is used, the SV system performs worse than when they are used together, with EERs of 10.75% and 10.66%, respectively. When only JL-DA is used (i.e., without SP-DA), the VAD is adapted only to minimize the SV loss rather than to achieve the original purpose of the VAD task. In this case, the roles of the soft VAD and the attention mechanism overlap, because the attention model is also trained to minimize the SV loss. Using SP-DA together avoids this problem. Likewise, SP-DA performs better when combined with JL-DA; we discuss the reason in the following subsection.
Table 2: EERs of SV systems with different VAD methods.

| DA  | Pooling | VAD type                   | EER (%) |
|-----|---------|----------------------------|---------|
| No  | TAP     | Hard VAD (energy)          | 13.35   |
| No  | TAP     | Hard VAD (DNN)             | 11.39   |
| No  | SAP     | Hard VAD (DNN)             | 10.83   |
| No  | SAP     | Hard + A-soft VAD          | 10.63   |
| Yes | SAP     | JL-DA + A-soft VAD         | 10.75   |
| Yes | SAP     | SP-DA + A-soft VAD         | 10.66   |
| Yes | TAP     | Self-adaptive + G-soft VAD | 10.59   |
| Yes | SAP     | Self-adaptive + A-soft VAD | 9.21    |
| No  | SAP     | Ground-truth VAD labels    | 8.27    |
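The EERs reported in this section can be computed from trial scores as in the following sketch (a simple threshold sweep; production toolkits typically interpolate the ROC curve, so values may differ slightly):

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal error rate from trial scores and 0/1 target labels: sweep
    the threshold over the sorted scores and return the point where the
    false-accept and false-reject rates are closest."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    fars, frrs = [], []
    for th in np.sort(scores):
        accept = scores >= th
        fars.append(np.mean(accept[labels == 0]))   # false accepts
        frrs.append(np.mean(~accept[labels == 1]))  # false rejects
    fars, frrs = np.array(fars), np.array(frrs)
    i = np.argmin(np.abs(fars - frrs))
    return float((fars[i] + frrs[i]) / 2)
```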
5.2 Effect of the loss weight
As evaluation metrics, we use the EER for SV and the area under the ROC curve (AUC) for VAD, respectively. The pre-trained VAD (before domain adaptation) gives an AUC of 91.58%.
Here, λ denotes the loss weight for the SP-DA loss. Therefore, increasing λ increases the impact of SP-DA. When λ is 0, we only use JL-DA without SP-DA (i.e., "JL-DA + A-soft VAD" in Table 2). In this case, both SV and VAD performance are at their worst, with an EER of 10.75% and an AUC of 94.15%. As discussed in the previous subsection, this is because the roles of the soft VAD and the attention mechanism overlap. With SP-DA, the VAD can be explicitly adapted to its intended purpose, which is to classify frames as speech or non-speech.
The AUC increases as the loss weight grows toward 1.5, reaching its highest value of 97.44% at λ = 1.5. However, the AUC decreases when λ exceeds 1.5; we believe this is because the VAD model overfits the adaptation data and loses its ability to generalize. If λ is extremely large, the influence of SP-DA becomes dominant; this corresponds to using only SP-DA without JL-DA (i.e., "SP-DA + A-soft VAD" in Table 2), which gives an EER of 10.66% and an AUC of 94.98%. From that point, the AUC increases as λ decreases toward 1.5. We can therefore conclude that JL-DA acts as a regularizer for SP-DA.
According to the figure, the EER tends to decrease as the AUC increases, with some exceptions. This is consistent with our intuition that the robustness of the VAD is directly related to SV performance. The best result is obtained at λ = 2, where we achieve an EER of 9.21% with an AUC of 97.41%.
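The AUC metric used here for VAD can be computed via the rank-sum identity, as in the following NumPy sketch (O(n²) pairwise comparison, fine for illustration): the AUC equals the probability that a randomly chosen speech frame scores above a randomly chosen non-speech frame.

```python
import numpy as np

def compute_auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: the fraction of (speech, non-speech) frame pairs in which
    the speech frame receives the higher score, with ties counted half."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))
```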
6 Conclusions
In this paper, we proposed self-adaptive soft VAD to integrate a DNN-based VAD into a deep speaker embedding-based speaker verification system. To reduce the domain mismatch between the VAD and SV data, we combined two algorithms: self-adaptive VAD and soft VAD. By applying soft VAD within self-attentive pooling, we could fine-tune the VAD to directly improve the performance of the speaker verification system. In addition, we could obtain VAD labels by thresholding the speech posteriors estimated by the VAD and fine-tune the VAD on the resulting labeled data. On the Korean speech dataset, the proposed VAD algorithm outperforms previous approaches for text-independent speaker verification in realistic noisy conditions. In future work, we will explore how to automatically determine or adapt the threshold in speech posterior-based domain adaptation.
Acknowledgments
This material is based upon work supported by the Ministry of Trade, Industry and Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10063424, Development of distant speech recognition and multi-task dialog processing technologies for in-door conversational robots).
-  M. McLaren, M. Graciarena, and Y. Lei, “Softsad: Integrated frame-based speech confidence for speaker recognition,” in Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 4694–4698.
-  H. Yamamoto, K. Okabe, and T. Koshinaka, “Robust i-vector extraction tightly coupled with voice activity detection using deep neural networks,” in Proc. of Asia-Pacific signal and information processing association annual summit and conference (APSIPA ASC), 2017, pp. 600–604.
-  Q. Wang, K. Okabe, K. A. Lee, H. Yamamoto, and T. Koshinaka, “Attention mechanism in speaker recognition: What does it learn in deep speaker embedding?,” in Proc. of Spoken Language Technology Workshop (SLT), 2018, pp. 1052–1059.
-  K. Okabe, T. Koshinaka, and K. Shinoda, “Attentive statistics pooling for deep speaker embedding,” in Proc. of Interspeech, 2018, pp. 2252–2256.
-  H. Zeinali, L. Burget, J. Rohdin, T. Stafylakis, and J. H. Cernocky, “How to improve your speaker embeddings extractor in generic toolkits,” in Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 6141–6145.
-  Y. Tang, G. Ding, J. Huang, X. He, and B. Zhou, “Deep speaker embedding learning with multi-level pooling for text-independent speaker verification,” in Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 6116–6120.
-  M. Sahidullah and G. Saha, “Comparison of speech activity detection techniques for speaker recognition,” arXiv preprint arXiv:1210.0297, 2012.
-  H. B. Yu and M. W. Mak, “Comparison of voice activity detectors for interview speech in NIST speaker recognition evaluation,” in Proc. of Interspeech, 2011, pp. 2353–2356.
-  F. Bie, Z. Zhang, D. Wang, and T. Zheng, “DNN-based voice activity detection for speaker recognition,” in CSLT Tech. Rep, 2015, pp. 1–11.
-  Y. Jung, Y. Kim, H. Lim, and H. Kim, “Linear-scale filterbank for deep neural network-based voice activity detection,” in Proc. of Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Technique (O-COCOSDA), 2017, pp. 43–47.
-  Y. Jung, Y. Kim, Y. Choi, and H. Kim, “Joint learning using denoising variational autoencoders for voice activity detection,” in Proc. of Interspeech, 2018, pp. 1210–1214.
-  Z. Fan, Z. Bai, X. Zhang, S. Rahardja, and J. Chen, “AUC optimization for deep learning based voice activity detection,” in Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 6760–6764.
-  J. Hoffman, E. Tzeng, J. Donahue, Y. Jia, K. Saenko, and T. Darrell, “One-shot adaptation of supervised deep convolutional models,” in ICLR Workshop, 2014.
-  M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in Proc. of Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1717–1724.
-  S. Bak, P. Carr, and J. Lalonde, “Domain adaptation through synthesis for unsupervised person re-identification,” in Proc. of European Conference on Computer Vision (ECCV), 2018, pp. 193–209.
-  D. Snyder, D. Garcia-Romero, D. Povey, and S. Khudanpur, “Deep neural network embeddings for text-independent speaker verification,” in Proc. of Interspeech, 2017, pp. 999–1003.
-  T. Kinnunen and P. Rajan, “A practical, self-adaptive voice activity detector for speaker verification with noisy telephone and microphone data,” in Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013, pp. 7229–7233.
-  N. Asbai, M. Bengherabi, A. Amrouche, and Y. Aklouf, “Improving the self-adaptive voice activity detector for speaker verification using map adaptation and asymmetric tapers,” International Journal of Speech Technology, vol. 18, pp. 195–203, June 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.
-  C. Li, X. Ma, B. Jiang, X. Li, X. Zhang, X. Liu, Y. Cao, A. Kannan, and Z. Zhu, “Deep speaker: An end-to-end neural speaker embedding system,” arXiv preprint arXiv:1705.02304, 2017.
-  W. Cai, J. Chen, and M. Li, “Exploring the encoding layer and loss function in end-to-end speaker and language recognition system,” in Proc. of Odyssey Speaker and Language Recognition Workshop, 2018, pp. 74–81.
-  Y. Jung, Y. Kim, H. Lim, Y. Choi, and H. Kim, “Spatial pyramid encoding with convex length normalization for text-independent speaker verification,” in Proc. of Interspeech, 2019, pp. 4030–4034.
-  Y. Suh, Y. Kim, H. Lim, J. Goo, Y. Jung, Y. Choi, H. Kim, D. Choi, and Y. Lee, “Development of distant multi-channel speech and noise databases for speech recognition by in-door conversational robots,” in Proc. of Conference of The Oriental Chapter of International Committee for Coordination and Standardization of Speech Databases and Assessment Technique (O-COCOSDA), 2017, pp. 5–8.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in pytorch,” in Advances in Neural Information Processing Systems (NIPS) Autodiff Workshop, 2017.
-  D. Pearce and J. Picone, “Aurora working group: DSR front end LVCSR evaluation AU/384/02,” Institute for Signal and Information Processing, Mississippi State University, Technical Report, 2002.
-  J. Sohn, N. S. Kim, and W. Sung, “A statistical model-based voice activity detection,” IEEE Signal Processing Letters, vol. 6, no. 1, pp. 1–3, 1999.
-  X. L. Zhang and D. Wang, “Boosting contextual information for deep neural network based voice activity detection,” IEEE/ACM Transactions on Audio Speech and Language Processing, vol. 24, no. 2, pp. 252–264, 2016.
-  J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology, vol. 143, pp. 29–36, 1982.