Supervised deep learning based speech enhancement has made significant progress in the last decade. Notable works include masking and mapping based approaches, Speech Enhancement Generative Adversarial Network (SEGAN), Deep Feature Loss (DFL), end-to-end metric optimization, and Transformer based approaches [6, 7]. Meanwhile, active research exists on the robustness of Speaker Verification (SV) systems [8, 9, 10, 11]. Interest in speech enhancement also arises from the notion that it is a modern solution for improving noise robustness in SV systems [10, 12, 13]. Such studies demonstrate that explicit speech enhancement processing is beneficial to the state-of-the-art (SOTA) conventional x-vector and Probabilistic Linear Discriminant Analysis (PLDA) based SV system. We refer to this methodology as task-specific enhancement. Prior work revealed its benefit for other tasks like Speaker Diarization, Language Recognition, and Automatic Speech Recognition (ASR).
Building on perceptual loss, prior work proposed learning speech enhancement using a pre-trained auxiliary network to obtain a (deep feature) loss (Section 2). The authors observed that the usual supervised training with a time-domain loss gives poor enhancement performance on low Signal-to-Noise Ratio (SNR) test signals, as confirmed with speech enhancement metrics like Perceptual Evaluation of Speech Quality (PESQ) and Signal-to-Distortion Ratio (SDR). Therefore, they suggested instead minimizing the deviation between the auxiliary network activations of enhanced and (reference) clean signals. Here, enhanced signals refer to the output of the enhancement network (Figure 1).
Recently, a test-time feature denoising approach based on DFL was proposed and reported large gains over the SOTA data-augmented x-vector based SV system. Since the conventional x-vector system can tackle clean signals such as in the Speakers In The Wild (SITW) dataset [14, 19], the authors chose the DFL technique for its potential to handle low SNR signals. Due to their primary focus on final SV performance, they chose a speaker classification/embedding network as the auxiliary network. Such enhancement preserves speaker information. They reported results on a single-channel wide-band (16 kHz) dataset called BabyTrain, which consists of daylong recordings of children's speech in noisy and reverberant environments. The main contribution of this study is to explore in depth various facets of DFL, ask some novel analysis-oriented questions, and present an evaluation on real data (BabyTrain). We now describe the significance of all experiment sections.
Section 5.1 reproduces the gains observed with DFL based enhancement in prior work. Furthermore, it judges the utility of activations from the deeper and, especially, the last layer (i.e., the speaker embedding layer) of the auxiliary network. Motivation for this comes from the common knowledge that a convolutional network contains high-level information such as speaker identity primarily in the initial layers. Prior work used only the first few layers, and our preliminary experiments on their setup revealed degradation when incorporating deeper layer activations. However, their data setting was small (the VCTK corpus), and a much larger data setting such as ours is better suited to investigate this.
Section 5.2 investigates the choice of training data for the enhancement and auxiliary networks. For training the enhancement network, it is imperative to have a clean, large, and naturalistic corpus. For this, prior work chose a (high) SNR-filtered version of VoxCeleb [23, 24]. In DFL training, activations of noisy signals come from the auxiliary network (Equation 1). Hence, it remains an open question whether a stronger auxiliary network, i.e., one trained with (noisy) data augmentations, is superior. The training data choice is important to us because we focus on BabyTrain, and large “in the wild” public data releases such as SITW, VoxCeleb, and CN-Celeb do not explicitly account for children's speech.
Section 5.3 asks whether it is beneficial to use higher dimensional features in the enhancement network. For uniformity, we start with the same features (40-dimensional log Mel-filterbank (LMFB)) for the enhancement, auxiliary, and x-vector networks. Then, we quantify the effect of increasing the feature dimension for the former network while keeping it fixed for the others. This idea of using different features for different networks is promising because most feature-domain enhancement studies work with spectrogram features. They have a higher dimension than the standard 40-D LMFB features, and we experiment with them too.
Section 5.4 explores whether enhancement of PLDA and x-vector network data brings improvement on top of the simple test set enhancement scheme. Enhancement of data other than test sets can, potentially, counter the distortion introduced by enhancement and reduce the mismatch among test, PLDA, and x-vector network data. This is a notable problem in speech enhancement [17, 25, 26]. Note that enhancing x-vector data means training the x-vector network with enhanced features.
Section 5.5 considers a different viewpoint from Section 5.4 and asks whether enhanced signals contain useful/complementary information relative to the original signals. We investigate this by including both enhanced and original signals in the PLDA and x-vector data. Such analysis should provide insight into the nature of enhanced signals. It is worthwhile because our enhancement setup is in the (filterbank) feature domain, making it infeasible to calculate time-domain metrics like SDR and PESQ for analysis.
Section 5.6 tests the effectiveness of enhancement when a noise class is missing from the data augmentation of the x-vector network. While designing a generic x-vector based SV system, it is common practice to mix clean data with several noise classes such as music, babble, and general environmental noises. We use this particular notion of data augmentation in this study. This may not be optimal for the deployed environment and may even cause performance degradation. Thus, enhancement as a solution to the robustness of SV is attractive, provided the enhancer has good generalization properties. This section quantifies this generalization. In this “leave-one-out” analysis, we separately consider the cases when enhancement has or has not seen the missing class. This analysis is akin to finding harmful and/or superfluous noise classes during data augmentation and is, thereby, similar in motivation to ablation and pruning work in deep learning [27, 28].
Section 5.7 addresses an important extension: the effectiveness of DFL enhancement for dereverberation for SV. Weighted Prediction Error (WPE) is widely regarded as the SOTA dereverberation technique. Recently, a Generative Adversarial Network (GAN) based domain-adaptation work outperformed it in a large-scale setting [30, 31]. We design several dereverberation schemes based on DFL. Several of these schemes combine denoising, since dereverberation alone may be ineffective for final performance gains.
2 Deep Feature Loss
Perceptual loss or Deep Feature Loss [18, 4] refers to the extraction of a loss from a pre-trained auxiliary network by comparing its activations for enhanced and reference clean signals. To obtain this, we manually pre-select a few hidden layers of the auxiliary network. The main idea is to enhance while retaining high-level properties of the signal. These properties depend on the choice of the auxiliary task. With a speaker embedding/classification network (as in our case), enhancement preserves speaker information. Mathematically, DFL using $J$ hidden layers of the auxiliary network is:
$$\mathcal{L}_{\mathrm{DFL}_J}(x, y) = \sum_{j=1}^{J} \lVert \Phi_j(f(x)) - \Phi_j(y) \rVert_1$$

Here, $x$ and $y$ refer to the noisy and clean feature matrices of size $F \times T$, $F$ is the feature dimension, $T$ is the number of frames, $J$ is the number of hidden layers considered for DFL computation, $j$ is the index for such layers, $\Phi$ is the auxiliary network, and $f$ is the enhancement network. A corresponding visual description is in Figure 1. The maximum value of $J$ is 5, referring to 5 equidistant hidden layers preselected in our auxiliary network. We handle final layer activations exclusively by the loss denoted $\mathcal{L}_{\mathrm{emb}}$. $\mathcal{L}_{\mathrm{FL}}$ refers to the usual feature loss, i.e., computed without using the auxiliary network. Importantly, we do not use the x-vector network itself for extracting DFL because it may not be optimal, as noted in Section 5.2.
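The loss computation above can be sketched as follows. The tiny layer functions and vectors are illustrative stand-ins, not the paper's actual enhancement or auxiliary networks:

```python
# Sketch of the deep feature loss: the summed L1 deviation between
# auxiliary-network hidden activations of the enhanced and the clean
# signal. The toy "layers" below stand in for the real ResNet-34
# auxiliary network.

def l1(a, b):
    """Mean absolute deviation between two activation vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def make_layer(scale):
    """Toy hidden layer: elementwise scaling followed by ReLU."""
    return lambda v: [max(0.0, scale * x) for x in v]

# Auxiliary network as a stack of 5 (equidistant) preselected layers.
aux_layers = [make_layer(s) for s in (0.9, 1.1, 0.8, 1.2, 1.0)]

def activations(x):
    """Collect the hidden activations of every preselected layer."""
    acts, h = [], x
    for layer in aux_layers:
        h = layer(h)
        acts.append(h)
    return acts

def deep_feature_loss(enhanced, clean, num_layers=5):
    """Sum of per-layer L1 deviations over the first num_layers layers."""
    acts_e, acts_c = activations(enhanced), activations(clean)
    return sum(l1(e, c) for e, c in zip(acts_e[:num_layers], acts_c[:num_layers]))

enhanced = [0.5, -0.2, 0.1]   # output of the enhancement network (toy)
clean = [0.6, -0.1, 0.0]      # reference clean features (toy)
loss = deep_feature_loss(enhanced, clean)
```

During training, only the enhancement network's parameters receive gradients; the pre-trained auxiliary network stays frozen.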
3 Neural Networks Architectures
3.1 Enhancement network
We choose a Convolutional Neural Network (CNN) based Context Aggregation Network (CAN), except with a higher number of channels (90). Its main features are linearly increasing dilations (1 to 8), eight convolution layers, Adaptive Batch Normalization (BN), LeakyReLU activations, and three Temporal Squeeze Excitation (TSE) connections along with residual connections.
The final layer linearly maps the output to the input dimension, and a subsequent logarithm operation predicts the Time-Frequency (TF) mask. To mimic the Signal Approximation (SA) loss, we add this log-domain mask to the original input (multiplication in the linear domain) to predict the final enhanced features. We found this global skip connection significantly helpful in our preliminary experiments. The network has a context length of 73 frames and 10.2M parameters. Since the main feature of CAN is its large context, we tried increasing its receptive field but observed degradation in our preliminary experiments.
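This global skip connection can be illustrated with a minimal sketch. Adding the predicted log-domain mask to the log-filterbank input is equivalent to multiplying by the mask in the linear domain (the values below are toy placeholders):

```python
import math

# Sketch of the log-domain masking skip connection: the network's
# final layer predicts a log-domain TF mask, which is *added* to the
# log features -- equivalent to elementwise multiplication by the mask
# in the linear domain (signal-approximation-style masking).

def apply_log_mask(log_features, log_mask):
    """Enhanced log features = log input + log mask (elementwise)."""
    return [f + m for f, m in zip(log_features, log_mask)]

linear_feats = [2.0, 4.0, 8.0]   # toy linear-domain filterbank values
mask = [0.5, 1.0, 0.25]          # toy linear-domain TF mask

log_feats = [math.log(x) for x in linear_feats]
log_mask = [math.log(m) for m in mask]

enhanced_log = apply_log_mask(log_feats, log_mask)
enhanced_linear = [math.exp(x) for x in enhanced_log]
# enhanced_linear matches elementwise multiplication by the mask
```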
3.2 Auxiliary network
The auxiliary network used in this work is the 16 kHz version of the ResNet-34 network described in [32, 33, 14]. We select this network due to its good performance on SV. It is a 2D CNN based ResNet-34 residual network with Learnable Dictionary Encoding (LDE) pooling and an Angular Softmax loss function [36, 37]. The dictionary size of LDE is 64 and the network has 5.9M parameters.
3.3 x-vector network
We choose the Extended TDNN (E-TDNN) architecture. E-TDNN greatly improves upon the Time-Delay Neural Network (TDNN) by interleaving dense layers with convolution layers and employing a (slightly) wider temporal context. Total trainable parameters are 10M. A summary of its exact specification is available in the literature. Prior work prefers a larger Factorized TDNN (F-TDNN) network due to its superior performance over E-TDNN. Since several of our experiments require re-training of the x-vector network, we choose E-TDNN to facilitate faster experimentation. Note that E-TDNN gives competitive performance and, therefore, is suitable for our analysis-oriented work.
4 Experimental Setup
4.1 Dataset details
We combine VoxCeleb1 and VoxCeleb2 [39, 23, 40] to create voxceleb. We then concatenate utterances from the same video to create voxcelebcat (or vc). This gives us 2710 hrs of relatively clean audio with 7185 speakers. voxcelebcat_div2 (or vc_div2) refers to a random 50% subset of voxcelebcat. We use an SNR estimation algorithm called Waveform Amplitude Distribution Analysis (WADA-SNR) to retain the top 50% cleanest samples from voxcelebcat to create voxcelebcat_wadasnr (or vc.w). This is 1665 hrs of audio with 7104 speakers.
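A minimal sketch of this SNR-based filtering, assuming per-utterance SNR estimates are already available (the toy values below stand in for hypothetical WADA-SNR output):

```python
# Sketch of the filtering used to build voxcelebcat_wadasnr: rank
# utterances by estimated SNR and keep the cleanest 50%.

def filter_top_clean(utterances, snr_estimates, keep_fraction=0.5):
    """Return the keep_fraction of utterances with highest estimated SNR."""
    ranked = sorted(zip(utterances, snr_estimates),
                    key=lambda pair: pair[1], reverse=True)
    n_keep = int(len(ranked) * keep_fraction)
    return [utt for utt, _ in ranked[:n_keep]]

utts = ["utt1", "utt2", "utt3", "utt4"]
snrs = [25.0, 5.0, 18.0, 12.0]            # toy SNR estimates in dB
clean_subset = filter_top_clean(utts, snrs)   # -> ["utt1", "utt3"]
```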
To create the noisy counterpart, we use noise utterances from the MUSAN and DEMAND corpora. We make the reverberant counterpart using impulse responses of small and medium size rooms from the Aachen Impulse Response (AIR) database. A 90-10 split gives us the training and validation lists for the enhancement system. Lastly, we use MUSAN and DEMAND to corrupt voxcelebcat to form voxcelebcat_combined (or vcc). Its size is three times that of voxcelebcat. “libri” refers to the LibriSpeech dataset. Unless specified otherwise, we train the auxiliary network and x-vector network with voxcelebcat_wadasnr and voxcelebcat_combined respectively.
For evaluation on real data, we choose the BabyTrain corpus, which is based on the HomeBank repository. It consists of day-long children's speech in uncontrolled noisy and reverberant environments. Recordings are in the presence of several (dynamic numbers of) background speakers. Training data for diarization and detection (adaptation data) has durations of 130 and 120 hrs respectively. Enrollment and test data are 95 and 30 hrs respectively. The classification of enrollment and test utterances is as follows: test≥t and enroll=t refer to test and enrollment utterances with at least (respectively exactly) t seconds from the speaker of interest. For enrollment utterances, time marks of the target speaker are present, but not for the test utterances. There may be multiple speakers present in the test utterances. Scripts for the above-described data preparations are available online (https://github.com/jsalt2019-diadet).
4.2 Training details
We train the CAN with a batch size of 32, a learning rate of 0.001 (exponentially decreasing), 6 epochs, the Adam optimizer, and 500 frames (5s audio). Code for the CAN is available online as “DFL_TSEResCAN2d_SmallContext_LogSigMask_BNIn” (https://github.com/jsalt2019-diadet/jsalt2019-diadet/blob/master/egs/sitw_noisy/v1.pyfb/steps_pyfe/enh_models/models.py). Unless otherwise stated, input features are un-normalized 40-D LMFB features. We train the auxiliary network with a batch size of 128, 50 epochs, the Adam optimizer, a learning rate of 0.0075 (exponentially decreasing) with warmup, and sequences of 800 frames (8s audio). Since this network is a CNN, we use mean-normalized LMFB features, which retain spatial information, contrary to Mel-Frequency Cepstrum Coefficient (MFCC) features. To account for this normalization mismatch with the enhancement network, we insert an online mean normalization between them during DFL training. For E-TDNN training, we use Kaldi scripts with 40-D MFCC features which have silence removed and are mean-normalized.
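The online mean normalization inserted between the two networks could look like the sketch below, a simplified stand-in that subtracts a running per-dimension mean computed causally over frames (the actual implementation may differ):

```python
# Sketch of online mean normalization: for each incoming frame,
# subtract the per-dimension mean of all frames seen so far.

def online_mean_normalize(frames):
    """Subtract the running mean (over frames seen so far) from each frame."""
    normalized, running_sum = [], None
    for t, frame in enumerate(frames, start=1):
        if running_sum is None:
            running_sum = [0.0] * len(frame)
        running_sum = [s + x for s, x in zip(running_sum, frame)]
        mean = [s / t for s in running_sum]
        normalized.append([x - m for x, m in zip(frame, mean)])
    return normalized

feats = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # toy 2-D frames
out = online_mean_normalize(feats)
# the first frame becomes all zeros (the mean equals the frame itself)
```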
4.3 Evaluation details
The PLDA-based back-end consists of a 200-D Linear Discriminant Analysis (LDA) followed by generative Gaussian SPLDA. Additionally, we use a diarization system since BabyTrain contains babble noise (background speakers). For this, we followed the Kaldi x-vector Callhome diarization recipe. Details are in the JHU-CLSP diarization system description. Note that, in general, “enhancement of test set” refers to enhancing test, enroll, and adaptation data. For the final evaluation, we use standard metrics like Equal Error Rate (EER) and minimum Detection Cost Function (minDCF) at the target prior of the NIST SRE18 VAST operating point. Except for Kaldi based x-vector training, we developed the entire framework using the Hyperion library (https://github.com/jsalt2019-diadet/hyperion).
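For reference, EER can be computed from verification scores with a simple threshold sweep, as in the sketch below (toy scores; the actual evaluation also reports minDCF at the chosen operating point):

```python
# Sketch of Equal Error Rate (EER) computation: sweep the decision
# threshold over all observed scores and find where the false-rejection
# and false-acceptance rates are (nearly) equal.

def eer(target_scores, nontarget_scores):
    """Return the EER estimated over thresholds at the observed scores."""
    thresholds = sorted(set(target_scores + nontarget_scores))
    best = None
    for th in thresholds:
        fr = sum(s < th for s in target_scores) / len(target_scores)
        fa = sum(s >= th for s in nontarget_scores) / len(nontarget_scores)
        gap = abs(fr - fa)
        if best is None or gap < best[0]:
            best = (gap, (fr + fa) / 2)
    return best[1]

targets = [2.5, 1.8, 3.0, 0.4]       # same-speaker trial scores (toy)
nontargets = [0.2, 0.9, -0.5, 0.5]   # different-speaker trial scores (toy)
rate = eer(targets, nontargets)      # -> 0.25 for these toy scores
```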
5.1 Baseline results
In Table 1, we reproduce the claims of prior work. The last column refers to the mean metric value per row. We organize results for EER and minDCF separately. Boldface results signify the best value achieved per column per metric. Note that the x-vector network is trained with augmentation in all cases and enhancement is applied to adaptation data, enrollment, and test utterances. That is, we use the default test-time enhancement scheme as mentioned in Section 4.3.
“no-enh” refers to the case when enhancement is not used in the SV pipeline. The DFL system using all intermediate hidden layers of the auxiliary network achieves relative improvements of 12.3% and 12.5% in EER and minDCF respectively w.r.t. “no-enh”. The plain feature loss leads to smaller gains, contrary to the degradation observed in prior work. This variation is perhaps due to the use of a different x-vector network in this work. Combining it with DFL gives better results. We note that adding the auxiliary network's speaker embedding layer loss does not lead to improvement. This suggests that not all hidden activations from the auxiliary network are useful for final performance. Using fewer layers in DFL does not lead to a consistent observation. Nevertheless, DFL with all intermediate hidden layers gives the best performance for both metrics and serves as the baseline for this work. These baseline results appear in all results tables under different names, all denoted by (*).
Importantly, note that results under “test>=0s” represent the final average performance on BabyTrain. “mean” refers to the weighted mean performance, with higher weight for longer test trials. In practice, it is uncommon to have very short test utterances. Therefore, for practical significance, we consider “mean” for final model comparisons in this work. For simplicity in reading all tables, the reader may focus on “mean” performance.
5.2 Choice of training data for enhancement and auxiliary network
Table 2 presents the results obtained with different choices of training data for the enhancement and auxiliary networks. Here, training data for the enhancement network refers to the clean data counterpart required for creating training pairs for supervised learning. A preliminary WADA-SNR analysis of VoxCeleb (“vc”) revealed the presence of several low SNR signals. For this reason, we use SNR estimation to retain the top 50% cleanest utterances from “vc” to form “vc.w”. The second column of Table 2 specifies the training data for the enhancement and auxiliary networks (separated by “-”) respectively.
We make a few prominent observations. First, by comparing enhancers trained with “vc” and “vc.w” as enhancement network training data, we find using full VoxCeleb (“vc”) harmful for both metrics. This suggests “vc” may not be clean enough for training the enhancer and some filtering may be necessary. Second, using “vc_div2” in place of “vc.w” degrades EER, which suggests that SNR-based filtering is better than random subsampling. Third, testing the hypothesis that cleaner data (LibriSpeech) helps further, we find that it gives the worst performance. This establishes the superiority of VoxCeleb, perhaps due to its diverse and spontaneous conversational nature, in contrast to the read speech nature of LibriSpeech. Fourth, in our DFL formulation, we obtain activations of noisy samples from the auxiliary network (Equation 1). We do not observe gains by using a stronger auxiliary network (trained with “vc” or “vcc”). This is contrary to the popular notion that even clean test files benefit from data augmentation. It also indicates that using the x-vector network for deep feature loss extraction may not be optimal, as hinted in Section 2. To sum up, we obtain the best results with SNR-filtered VoxCeleb for both networks (“vc.w-vc.w”).
5.3 Enhancement with mismatch between enhancement and x-vector/aux. network acoustic features
Table 3 presents the results obtained by varying the features used in the enhancement network. Result rows specify the feature dimension against the name of the feature. Features (40-D LMFB) for the auxiliary and x-vector networks remain unchanged. A trainable linear layer bridges the enhancement and auxiliary networks to handle the mismatch of feature dimensions between these networks. We note that all higher dimensional feature models result in a similar level of degradation, except for the spectrogram, which leads to less degradation. As additional evidence, we observed higher variance in the training and validation losses for these networks. This degradation is perhaps because learning with higher dimensional features requires more data. A fair comparison study should, correspondingly, vary the training data amounts, but we do not investigate that. Another option to avoid degradation could be to use the same higher-dimensional features for all three networks. However, that leads to increased training complexity and, possibly, worse performance, as suggested by the popularity of low-dimensional features like 40-D LMFB in SOTA SV systems.
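A minimal sketch of such a dimension-bridging linear layer is given below. The dimensions and fixed weights are toy assumptions; in DFL training the weights would be learned jointly with the enhancement network:

```python
# Sketch of a linear layer that projects the enhancement network's
# higher-dimensional output down to the feature dimension expected by
# the auxiliary network.

def linear_bridge(frame, weights):
    """Project a frame of dim len(frame) to dim len(weights)."""
    return [sum(w * x for w, x in zip(row, frame)) for row in weights]

high_dim_frame = [1.0, 2.0, 3.0, 4.0]   # 4-D stand-in for e.g. a spectrogram frame
weights = [[0.25, 0.25, 0.25, 0.25],    # 2 output dims stand in for 40-D LMFB
           [1.0, 0.0, 0.0, 0.0]]
bridged = linear_bridge(high_dim_frame, weights)   # -> [2.5, 1.0]
```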
5.4 Effect of enhancing PLDA and/or x-vector data on top of test set enhancement
Table 4 presents the results for systems with enhancement of PLDA and/or x-vector training data (train) on top of test set (test, enroll, and adaptation data) enhancement (test). The first column lists the datasets that undergo enhancement processing. We find enhancing PLDA data (slightly) harmful. Enhancing x-vector data gives the best minDCF, while enhancing x-vector and PLDA data gives the worst performance, even worse than the case of no enhancement. This suggests that PLDA is susceptible to enhancement processing. This finding is contrary to the notion that enhancement of all datasets solves the mismatch problem.
5.5 Augmentation with enhanced features
Table 5 presents results for this experiment. Here, “test (*)” and “PLDA,test” (from Table 4) represent enhancement of the test set and of the test set along with PLDA data respectively. To gain insight into the nature of enhanced signals, we investigate whether they contain complementary information to the original signals. “aug-in-PLDA” refers to including enhanced signals with original (non-enhanced) signals in the PLDA data. In Section 5.4, we noted that training PLDA with enhanced data gives worse performance compared to training with original data. Here, combining them causes further degradation.
The next experiment is “aug-in-train”, which refers to training the x-vector network with original as well as enhanced data. This doubles the training data and time but, nevertheless, constitutes a fair investigation since we train all x-vector networks till convergence and do not introduce any new data. Note that we assign the same speaker label to an enhanced signal as to its original. Doing this brings huge (relative) improvements of ~40% in both metrics. This strongly supports our hypothesis that enhanced signals contain useful complementary information. This is a novel finding, albeit computationally expensive. “aug-in-train,PLDA” is an extension of “aug-in-train”. It refers to the inclusion of enhanced and original signals in the x-vector as well as PLDA data. This leads to some degradation with respect to “aug-in-train”. Thus, it is our consistent observation that PLDA is susceptible to enhancement processing and is best trained with unenhanced data. It is useful to reiterate that in all our enhancement schemes, the test set is always enhanced.
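The “aug-in-train” data preparation can be sketched as below: each enhanced utterance is added alongside its original with the same speaker label, doubling the training set (the enhancer here is a toy stand-in):

```python
# Sketch of "aug-in-train": augment the x-vector training set with
# enhanced copies of every utterance, keeping the speaker labels.

def augment_with_enhanced(dataset, enhance):
    """dataset: list of (features, speaker_label); enhance: feature map."""
    augmented = []
    for feats, spk in dataset:
        augmented.append((feats, spk))
        augmented.append((enhance(feats), spk))   # same label as original
    return augmented

toy_enhance = lambda feats: [x * 0.9 for x in feats]   # stand-in enhancer
data = [([1.0, 2.0], "spk_a"), ([3.0, 4.0], "spk_b")]
train_set = augment_with_enhanced(data, toy_enhance)
# the training set doubles; enhanced copies keep their speaker labels
```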
5.6 Leave-one-out noise class in x-vector data
Table 6 summarizes the findings for this experiment. As previously, “no-enh” and “test-enh (*)” serve as reference results. In our case, we have four noise classes, namely noise, music, babble, and chime3bg. In simulated data settings, the introduction of new noise classes in x-vector data usually leads to performance gains. However, these augmentations can be harmful for real data, as established by the result rows which contain a noise class name in the first column. They represent four SV systems with x-vector data missing one noise class. These results do not include enhancement and, thus, are comparable with the “no-enh” system, which has seen all noise classes. We find omitting the music class in x-vector data gives the best performance on BabyTrain. Similarly, omitting babble and chime3bg leads to performance better than “no-enh”. Predicting a priori which noise class will hurt final performance is impossible. Therefore, speech enhancement is an appealing solution for improving robustness.
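The leave-one-out setup can be expressed compactly: for each held-out class, the x-vector augmentation uses only the remaining three classes. A small sketch:

```python
# Sketch of the leave-one-out augmentation lists: map each held-out
# noise class to the classes actually used for x-vector augmentation.

NOISE_CLASSES = ["noise", "music", "babble", "chime3bg"]

def leave_one_out_lists(classes):
    """For each held-out class, list the remaining augmentation classes."""
    return {held: [c for c in classes if c != held] for held in classes}

lists = leave_one_out_lists(NOISE_CLASSES)
# e.g. lists["music"] -> ["noise", "babble", "chime3bg"]
```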
For all four SV systems, we report the benefit of using our enhancement scheme. “enh-seen” and “enh-unseen” refer to the cases when the enhancement network training has or has not seen the missing noise class respectively. Underlined numbers refer to the best performance per SV system. Enhancement helped all four systems individually. As expected, the enhancement system which has seen the missing noise class achieves the best performance (except for noise). Importantly, this shows that enhancement helps even when a noise class is missing from x-vector training, regardless of whether the enhancer has seen that noise class itself. However, “test-enh (*)” is worse than the best performance achieved in this ablation experiment, which reveals that the current enhancement scheme is not strong enough to counter the degradation caused by harmful data augmentations. This also highlights the scope for improvement of the enhancement scheme. Lastly, we note that omitting noise (general environmental noises) brings degradation, suggesting the importance of complex environmental noises in training. Thus, incorporating noise files from Voices2019, the DCASE Challenge (http://dcase.community/), and AudioSet can be useful in our framework.
5.7 Handling reverberations
It is unclear whether the DFL based supervised enhancement scheme can work for the dereverberation task. It is also unclear how much scope there is for dereverberation in BabyTrain. In Table 7, we present results for several dereverberation schemes, some combined with denoising. “WPE” refers to Weighted Prediction Error algorithm based pre-processing. It gives minor improvement over “no-enh”. This suggests that dereverberation is either very challenging or has little scope in BabyTrain in the first place. “dereverb” refers to a DFL system trained only for dereverberation, which gives worse performance than “WPE”, suggesting the DFL scheme does not work for dereverberation out-of-the-box. “WPE->denoise” is the denoising system with WPE pre-processing. It is minimally better than “denoise”. However, it is largely better than “dereverb->denoise”, which refers to the use of two DFL systems trained (separately) for the two tasks respectively. “denoise->dereverb” (the flipped version of “dereverb->denoise”) does not lead to a significant difference.
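The cascaded (disjoint) schemes compared above amount to function composition over feature sequences, as in this sketch; `wpe`, `denoise`, and `dereverb` are hypothetical stand-ins for the actual systems:

```python
# Sketch of cascaded enhancement schemes as function composition.

def cascade(*stages):
    """Compose enhancement stages left to right into one pipeline."""
    def pipeline(x):
        for stage in stages:
            x = stage(x)
        return x
    return pipeline

# Toy placeholders for the real WPE / DFL denoising / DFL dereverb systems.
wpe = lambda x: [v * 0.95 for v in x]
denoise = lambda x: [v - 0.01 for v in x]
dereverb = lambda x: [v * 0.9 for v in x]

wpe_then_denoise = cascade(wpe, denoise)       # "WPE->denoise"
dfl_two_stage = cascade(dereverb, denoise)     # "dereverb->denoise"
out = wpe_then_denoise([1.0, 2.0])
```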
We now describe the joint training schemes. “joint1stage” refers to a DFL system trained for denoising and dereverberation (jointly) in one go. Training pairs for it contain examples for denoising, dereverberation, and both. Note that it is worse than “WPE->denoise”, suggesting that doing these two tasks in one go is hard. “joint2stage” is an assisted modification of “joint1stage”. In addition to accepting a reverberant and noisy signal as input, it accepts another reverberant signal in the middle of the network and tries to minimize its deep feature loss as well. This forces the network to first do denoising, mimicking the standard signal model in signal processing. This assisted scheme did not work, further solidifying our presumption that combining the tasks of denoising and dereverberation is very challenging. Since our denoising network has seen a few reverberant samples (from chime3bg), we tried a double (disjoint) denoising scheme (“denoise->denoise”) and found it brings minimal improvement. Results in this section finally suggest that the current DFL scheme does not work for dereverberation, and we suspect this problem is better solved through a domain-adaptation methodology, as shown recently in [30, 31].
Incorporating robustness in Speaker Verification is a challenging problem. Data augmentation is a common practice which tries to handle it. BabyTrain is an appropriate dataset for this study due to its uncontrolled nature and emphasis on children's speech verification. Since large data releases do not explicitly account for children's speech, generalization of SV systems to lower age groups is an open question. This work evaluates that, albeit implicitly. Deep feature loss is a promising methodology which, in its current form, works along with data augmentation in the x-vector network. It is shown to bring vast improvements. Our experiments reveal that this test-time feature denoising approach is optimal when it utilizes all hidden activations of the auxiliary network excluding the final layer activations. A search for the best training data combination for the enhancement and auxiliary networks reveals it is optimal to use the top 50% of VoxCeleb utterances according to their SNR. This satisfies the ideal requirements of clean, large, and naturalistic data for training enhancement. Experiments using different features for the enhancement network show it is best to use the same 40-D LMFB features as in the auxiliary and x-vector networks.
An important inquiry into enhancing data other than the test set reveals it is beneficial for the front-end (x-vector network) but harmful for the back-end (PLDA). To further investigate the nature of enhanced signals, we hypothesize, and subsequently confirm, that they contain information complementary to the original signals. We combine both signals in the front-end and/or back-end to establish this. The x-vector network newly trained with the combined data turns out quite powerful, as demonstrated by ~40% relative improvements over the baseline. We also make the consistent observation that PLDA is susceptible to enhancement processing. Hence, it is best to train it using original features. The leave-one-out ablation analysis solidifies the notion that DFL enhancement is effective even when a noise class is missing from the training data of the enhancement and/or x-vector network. Importantly, we show the limitation of data augmentation by demonstrating the degradation caused by including certain noise classes. Surprisingly, they turn out to be common noise classes used in practice. Finally, we design several dereverberation schemes combining WPE, denoising, and dereverberation in either joint or disjoint fashion. Extensive evaluation suggests the ineffectiveness of DFL enhancement for dereverberation, while speculating that domain-adaptation is a superior methodology.
We also speculate that the findings of this work may vary with the choice of the x-vector network and the evaluation database. Nevertheless, the analysis reported here provides further insight into deep feature loss based Speaker Verification and addresses its advantages, weaknesses, and extensions.
-  DeLiang Wang and Jitong Chen, “Supervised speech separation based on deep learning: An overview,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 10, pp. 1702–1726, 2018.
-  Yong Xu, Jun Du, Li-Rong Dai, and Chin-Hui Lee, “A regression approach to speech enhancement based on deep neural networks,” IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), vol. 23, no. 1, pp. 7–19, 2015.
-  Santiago Pascual, Antonio Bonafonte, and Joan Serra, “Segan: Speech enhancement generative adversarial network,” arXiv preprint arXiv:1703.09452, 2017.
-  Francois G Germain, Qifeng Chen, and Vladlen Koltun, “Speech denoising with deep feature losses,” arXiv preprint arXiv:1806.10522, 2018.
-  Jaeyoung Kim, Mostafa El-Kharmy, and Jungwon Lee, “End-to-end multi-task denoising for joint sdr and pesq optimization,” arXiv preprint arXiv:1901.09146, 2019.
-  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin, “Attention is all you need,” in Advances in neural information processing systems, 2017, pp. 5998–6008.
-  Jaeyoung Kim, Mostafa El-Khamy, and Jungwon Lee, “Transformer with gaussian weighted self-attention for speech enhancement,” arXiv preprint arXiv:1910.06762, 2019.
-  Joon Son Chung, Arsha Nagrani, Ernesto Coto, Weidi Xie, Mitchell McLaren, Douglas A Reynolds, and Andrew Zisserman, “Voxsrc 2019: The first voxceleb speaker recognition challenge,” arXiv preprint arXiv:1912.02522, 2019.
-  Yue Fan, Jiawen Kang, Lantian Li, Kaicheng Li, Haolin Chen, Sitong Cheng, Pengyuan Zhang, Ziya Zhou, Yunqi Cai, and Dong Wang, “Cn-celeb: a challenging chinese speaker recognition dataset,” arXiv preprint arXiv:1911.01799, 2019.
-  Saurabh Kataria, Phani Sankar Nidadavolu, Jesús Villalba, Nanxin Chen, Paola García, and Najim Dehak, “Feature enhancement with deep feature losses for speaker verification,” arXiv preprint arXiv:1910.11905, 2019.
-  Yanpei Shi, Qiang Huang, and Thomas Hain, “Robust speaker recognition using speech enhancement and attention model,” arXiv preprint arXiv:2001.05031, 2020.
-  Daniel Michelsanti and Zheng-Hua Tan, “Conditional generative adversarial networks for speech enhancement and noise-robust speaker verification,” arXiv preprint arXiv:1709.01703, 2017.
-  Suwon Shon, Hao Tang, and James Glass, “Voiceid loss: Speech enhancement for speaker verification,” arXiv preprint arXiv:1904.03601, 2019.
-  Jesús Villalba, Nanxin Chen, David Snyder, et al., “State-of-the-art speaker recognition with neural network embeddings in nist sre18 and speakers in the wild evaluations,” Computer Speech & Language, p. 101026, 2019.
-  Paola García, Jesus Villalba, Hervé Bredin, Jun Du, Diego Castan, Alejandrina Cristia, Latane Bullock, Ling Guo, Koji Okabe, Phani Sankar Nidadavolu, et al., “Speaker detection in the wild: Lessons learned from jsalt 2019,” arXiv preprint arXiv:1912.00938, 2019.
-  Peter Sibbern Frederiksen, Jesús Villalba, Shinji Watanabe, Zheng-Hua Tan, and Najim Dehak, “Effectiveness of single-channel blstm enhancement for language identification.,” in Interspeech, 2018, pp. 1823–1827.
-  Neville Ryant, Elika Bergelson, Kenneth Church, Alejandrina Cristia, Jun Du, Sriram Ganapathy, Sanjeev Khudanpur, Diana Kowalski, Mahesh Krishnamoorthy, Rajat Kulshreshta, et al., “Enhancement and analysis of conversational speech: Jsalt 2017,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5154–5158.
-  Justin Johnson, Alexandre Alahi, and Li Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European conference on computer vision. Springer, 2016, pp. 694–711.
-  Mitchell McLaren, Luciana Ferrer, Diego Castan, et al., “The speakers in the wild (sitw) speaker recognition database.,” in Interspeech, 2016, pp. 818–822.
-  Mark VanDam, Anne S Warlaumont, Elika Bergelson, et al., “Homebank: An online repository of daylong child-centered audio recordings,” in Seminars in speech and language. Thieme Medical Publishers, 2016, vol. 37, pp. 128–142.
-  Wei Dai, Chia Dai, Shuhui Qu, Juncheng Li, and Samarjit Das, “Very deep convolutional neural networks for raw waveforms,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017, pp. 421–425.
-  Junichi Yamagishi, Christophe Veaux, Kirsten MacDonald, et al., “Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning toolkit (version 0.92),” 2019.
-  Arsha Nagrani, Joon Son Chung, Weidi Xie, and Andrew Zisserman, “Voxceleb: Large-scale speaker verification in the wild,” Computer Speech & Language, vol. 60, pp. 101027, 2020.
-  Chanwoo Kim and Richard M Stern, “Robust signal-to-noise ratio estimation based on waveform amplitude distribution analysis,” in Ninth Annual Conference of the International Speech Communication Association, 2008.
-  Jahn Heymann, Lukas Drude, and Reinhold Haeb-Umbach, “Wide residual blstm network with discriminative speaker adaptation for robust speech recognition,” in Proceedings of the 4th International Workshop on Speech Processing in Everyday Environments (CHiME’16), 2016, pp. 12–17.
-  Peidong Wang, Ke Tan, and DeLiang Wang, “Bridging the gap between monaural speech enhancement and recognition with distortion-independent acoustic modeling,” arXiv preprint arXiv:1903.04567, 2019.
-  Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2014, pp. 580–587.
-  Song Han, Huizi Mao, and William J Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding,” arXiv preprint arXiv:1510.00149, 2015.
-  Tomohiro Nakatani, Takuya Yoshioka, Keisuke Kinoshita, Masato Miyoshi, and Biing-Hwang Juang, “Blind speech dereverberation with multi-channel linear prediction based on short time fourier transform representation,” in 2008 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2008, pp. 85–88.
-  Phani Sankar Nidadavolu, Saurabh Kataria, Jesús Villalba, and Najim Dehak, “Low-resource domain adaptation for speaker recognition using cycle-gans,” arXiv preprint arXiv:1910.11909, 2019.
-  Phani Sankar Nidadavolu, Saurabh Kataria, Jesús Villalba, Paola Garcia-Perera, and Najim Dehak, “Unsupervised feature enhancement for speaker verification,” arXiv preprint arXiv:1910.11915, 2019.
-  Jesús Villalba, Nanxin Chen, David Snyder, et al., “The jhu-mit system description for nist sre18,” Johns Hopkins University, Baltimore, MD, Tech. Rep, 2018.
-  David Snyder, Jesús Villalba, Nanxin Chen, et al., “The jhu speaker recognition system for the voices 2019 challenge,” Proc. Interspeech 2019, pp. 2468–2472, 2019.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778.
-  Weicheng Cai, Zexin Cai, Xiang Zhang, Xiaoqi Wang, and Ming Li, “A novel learnable dictionary encoding layer for end-to-end language identification,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5189–5193.
-  Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song, “Sphereface: Deep hypersphere embedding for face recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 212–220.
-  Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang, “Large-margin softmax loss for convolutional neural networks.,” in ICML, 2016, vol. 2, p. 7.
-  David Snyder, Daniel Garcia-Romero, Gregory Sell, Alan McCree, Daniel Povey, and Sanjeev Khudanpur, “Speaker recognition for multi-speaker conversations using x-vectors,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 5796–5800.
-  Joon Son Chung, Arsha Nagrani, and Andrew Zisserman, “Voxceleb2: Deep speaker recognition,” arXiv preprint arXiv:1806.05622, 2018.
-  Arsha Nagrani, Joon Son Chung, and Andrew Zisserman, “Voxceleb: a large-scale speaker identification dataset,” arXiv preprint arXiv:1706.08612, 2017.
-  David Snyder, Guoguo Chen, and Daniel Povey, “Musan: A music, speech, and noise corpus,” arXiv preprint arXiv:1510.08484, 2015.
-  Joachim Thiemann, Nobutaka Ito, and Emmanuel Vincent, “The diverse environments multi-channel acoustic noise database (demand): A database of multichannel environmental noise recordings,” in Proceedings of Meetings on Acoustics ICA2013. ASA, 2013, vol. 19, p. 035081.
-  Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur, “Librispeech: an asr corpus based on public domain audio books,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 5206–5210.
-  Diederik P Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, et al., “The kaldi speech recognition toolkit,” in IEEE 2011 workshop on automatic speech recognition and understanding. IEEE Signal Processing Society, 2011.
-  D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur, “X-vectors: Robust dnn embeddings for speaker recognition,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018.
-  Seyed Omid Sadjadi, Timothée Kheyrkhah, Audrey Tong, Craig S Greenberg, Douglas A Reynolds, Elliot Singer, Lisa P Mason, and Jaime Hernandez-Cordero, “The 2016 nist speaker recognition evaluation.,” in Interspeech, 2017, pp. 1353–1357.
-  Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al., “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems, 2019, pp. 8024–8035.
-  David Snyder, Daniel Garcia-Romero, Gregory Sell, Daniel Povey, and Sanjeev Khudanpur, “X-vectors: Robust dnn embeddings for speaker recognition,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5329–5333.
-  Mahesh Kumar Nandwana, Julien Van Hout, Mitchell McLaren, Colleen Richey, Aaron Lawson, and Maria Alejandra Barrios, “The voices from a distance challenge 2019 evaluation plan,” arXiv preprint arXiv:1902.10828, 2019.
-  Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter, “Audio set: An ontology and human-labeled dataset for audio events,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017, pp. 776–780.