Large-scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification

10/12/2021
by   Zhengyang Chen, et al.

The speech representations learned from large-scale unlabeled data have shown better generalizability than those from supervised learning and thus attract much interest in being applied to various downstream tasks. In this paper, we explore the limits of speech representations learned by different self-supervised objectives and datasets for automatic speaker verification (ASV), especially with a well-recognized SOTA ASV model, ECAPA-TDNN [6], as the downstream model. The representations from all hidden layers of the pre-trained model are first averaged with learnable weights and then fed into the ECAPA-TDNN as input features. The experimental results on the VoxCeleb dataset show that the weighted average representation is significantly superior to Fbank, a conventional handcrafted feature for ASV. Our best single system achieves 0.564%, 0.561%, and 1.230% equal error rate (EER) on the three official trials of VoxCeleb1, respectively, and the ensemble system with three pre-trained models further improves the EERs to 0.431%, 0.507%, and 1.081%. Among the three evaluation trials, our best system outperforms the winner system [36] of the VoxCeleb Speaker Recognition Challenge 2021 (VoxSRC2021) on the VoxCeleb1-E trial.


1 Introduction

Recent years have witnessed significant improvements in automatic speaker verification (ASV). Researchers have developed various neural network architectures [6, 16, 24, 33], training objectives [31, 34, 12, 29], and pooling functions [18, 37] to push the limits of system performance. However, these techniques typically require a large amount of well-labeled data, which is challenging to collect for real applications due to the privacy concerns around speaker information. Over the past years, pre-trained models have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks. Inspired by the great success of BERT [7] and GPT [21], a series of works in the speech community, e.g., wav2vec 2.0 [1] and HuBERT [10], have been proposed to leverage large-scale unlabeled data, showing impressive results on automatic speech recognition (ASR) tasks.

In the speaker verification field, many researchers have designed specific losses to train a speaker embedding extractor from unlabeled data under the assumption that each utterance contains only one speaker [35, 30, 2]. Such an assumption may limit the applicability of unsupervised speaker verification training to unconstrained data from the internet. Wav2Vec 2.0 [1] and HuBERT [10] rely less on this assumption. These two pre-trained models have been shown to capture the phonetic structure of speech and thus benefit ASR. Probing the nature of the representations learned by different layers of pre-trained models is an interesting research topic in itself [13, 19]. The effectiveness of wav2vec 2.0 in a two-stage pre-training and fine-tuning process has been demonstrated on both speaker verification and language recognition tasks in [8]. Besides, [32] introduces a benchmark to evaluate pre-trained models and shows that speech representations learned from large-scale unlabeled data outperform Fbank on various downstream tasks, including ASV. To minimize architecture changes and fine-tuning across downstream tasks, the works above only use a simple downstream model and train the system on the small VoxCeleb1 [17] dataset for the ASV task. However, whether these speech representations can also benefit state-of-the-art (SOTA) ASV systems is still an open question.

In this paper, the speech representations learned from large-scale unlabeled data are extensively investigated on a benchmark dataset for speaker verification. The major contributions of this paper are four-fold:

  1. To the best of our knowledge, this is the first attempt to use speech representations learned from large-scale unlabeled data to improve the performance of a SOTA speaker verification model (i.e., ECAPA-TDNN [6]) on the VoxCeleb dataset.

  2. Instead of using the representations only from the final layer of the pre-trained model, we employ a weighted average of the representations from all hidden layers to fully leverage the speaker-related information embedded in the whole model.

  3. We conduct a comprehensive study on the performance of pre-trained models with different learning methods, model sizes and large-scale training datasets.

  4. A detailed analysis based on learnable weights is performed for probing layer-wise speaker information embedded in the pre-trained models.

2 Related Work

Speech signals contain many kinds of information, such as phonetic structure, emotion, and speaker identity. Fbank and MFCC are the most commonly used handcrafted acoustic features, characterizing the signal in the frequency domain. Researchers have also applied extensive feature engineering to improve them, e.g., delta features to capture the temporal dynamics of Fbank or MFCC. The authors in [26] combined an articulation rate filter with constant Q cepstral coefficients (CQCCs) [27] for speaker verification and achieved significant improvement over an MFCC baseline. To make better use of the powerful learning ability of neural networks, Ravanelli et al. [22] and Jung et al. [14] used convolutional neural networks to learn task-specific features directly from raw audio signals and achieved performance comparable to handcrafted features.

Recently, speech representation learning leveraging unlabeled data has been gradually emerging. It is commonly believed that models pre-trained with self-supervised learning generalize well, and that a simple classifier added on top of their representations can obtain decent performance on many downstream tasks, even with a limited amount of labeled data. Self-supervised learning for speech representations can be categorized into three approaches: 1) reconstruction learning, which reconstructs the original input using information extracted from past time steps or masked inputs; 2) contrastive learning, which learns high-level representations by solving a contrastive task in the latent embedding space; 3) multi-task learning with multiple objectives and multiple inputs. A review of these approaches is given in [32].

Figure 1: Leverage Representations from Pre-trained Model

3 Methods

3.1 Pre-train for Representation Learning

In this study, we leverage the representations from Wav2Vec 2.0 [1], HuBERT [10], and UniSpeech-SAT (a submission to ICASSP 2022; details at https://github.com/microsoft/UniSpeech) to perform the speaker verification task. These three models use different methods to learn feature representations. Wav2Vec 2.0 uses a contrastive loss to distinguish a true speech segment from negatives. The goal of HuBERT is to predict weakly supervised labels for the masked frames. UniSpeech-SAT integrates an utterance-wise contrastive loss into HuBERT-like representation learning, which forces speaker-related information into the learned representation. Despite the different training objectives described above, the pre-trained models share a similar model structure. As shown in the left part of Figure 1, all three models consist of a convolutional feature extractor and a deep Transformer [28] encoder. Mathematically, given an input waveform $\mathbf{x} \in \mathbb{R}^{T}$, where $T$ is the number of sampling points, the CNN feature encoder convolves the sample points into a sequence of feature vectors $\mathbf{z}_1, \dots, \mathbf{z}_N$. This sequence is then fed to the Transformer encoder, yielding a hidden state $\mathbf{h}_t^l$ for each frame $t$ at the $l$-th layer, where $l = 0, 1, \dots, L$ and $l = 0$ denotes the Transformer input.
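
For concreteness, the following is a minimal sketch (not the authors' code) of how the layer-wise hidden states described above can be obtained from a publicly released pre-trained model via the Hugging Face transformers library; the checkpoint name is illustrative and the toolkit choice is our assumption.

```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_name = "facebook/hubert-base-ls960"  # illustrative public checkpoint (assumption)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).eval()

waveform = torch.randn(16000 * 3)  # placeholder for 3 s of 16 kHz audio
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states is a tuple of L+1 tensors of shape (batch, frames, dim);
# index 0 corresponds to the Transformer input.
hidden_states = torch.stack(outputs.hidden_states)  # (L+1, batch, frames, dim)
print(hidden_states.shape)                          # (13, 1, 149, 768) for this Base model
```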

3.2 Leverage Representations from Pre-trained Model

3.2.1 Downstream Speaker Verification Model

In [8], the authors added an average pooling layer and a fully connected layer with a task-specific loss on top of the pre-trained models and achieved results comparable to systems using handcrafted features. In [32], the x-vector model [24] is used as the downstream model. To push the performance limit of the downstream task, we use the state-of-the-art speaker verification system ECAPA-TDNN [6] as the downstream model. Compared to the x-vector, ECAPA-TDNN has a more advanced design, e.g., Squeeze-Excitation Res2Blocks [11, 9] and multi-layer feature aggregation, which significantly improves system performance. The brief structure of ECAPA-TDNN is shown in the right part of Figure 1. The model takes the sequence of Fbank features as input. The frame encoder then extracts speaker information from each input frame, the statistics pooling layer transforms the variable-length input sequence into a fixed-dimensional representation, and finally a fully connected (FC) layer extracts the speaker embedding. To leverage the representations learned by the pre-trained models, we can replace Fbank with the last-layer outputs of the pre-trained models and feed them into the ECAPA-TDNN.
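
As a reference point for the pooling step just described, here is a minimal, generic sketch of plain statistics pooling (mean plus standard deviation over time). The actual ECAPA-TDNN uses attentive, channel- and context-dependent statistics pooling, so this is a simplification for illustration only.

```python
import torch
import torch.nn as nn

class StatisticsPooling(nn.Module):
    """Map a variable-length frame sequence to a fixed-dimensional vector."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames)
        mean = x.mean(dim=2)
        std = x.std(dim=2)
        return torch.cat([mean, std], dim=1)  # (batch, 2 * channels)

frames = torch.randn(8, 512, 300)           # 8 utterances, 512-dim frame encoder output, 300 frames
pooled = StatisticsPooling()(frames)
print(pooled.shape)                          # torch.Size([8, 1024]), fed to the FC embedding layer
```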

Pre-training / Downstream Model | Layer # | Parameter # | Training Data Duration | Sources | Language
HuBERT_Base | 12 | 95M | 960 hrs | Librispeech | English
HuBERT_Large | 24 | 316M | 60k hrs | Librivox | English
Wav2Vec2.0_Large (XLSR) | 24 | 316M | 56k hrs | Multilingual LibriSpeech, CommonVoice, BABEL | Over 36 languages
UniSpeech-SAT_Base | 12 | 95M | 94k hrs | Librivox, VoxPopuli, Gigaspeech | English
UniSpeech-SAT_Large | 24 | 316M | 94k hrs | Librivox, VoxPopuli, Gigaspeech | English
ECAPA-TDNN (small) [6] | - | 6M | 2.36k hrs | Voxceleb2 (Youtube) | Multilingual (mostly English)

Table 1: Detailed information on the pre-trained models used in our experiments and the downstream task model. For the UniSpeech-SAT_* models, we use Librivox (60k hrs), VoxPopuli (24k hrs), and Gigaspeech (10k hrs, English) to form the 94k hours of training data. The Layer # column only counts the Transformer layers of the pre-trained model.

3.2.2 Explore Speaker Information in Pre-trained Model

The pre-trained models, which have seen huge amounts of audio data, should generalize well to various downstream tasks. However, the results in [8] did not show the superiority of the pre-trained representations over handcrafted features. The objectives of most pre-training tasks are not directly related to speaker recognition, and the layers close to the final objective contain more information related to the training loss. It could therefore be better to extract speaker information from the lower layers of the pre-trained model.

Here, similar to the implementation in [32, 20], we introduce a learnable weight $w_l$ for the hidden states from each layer of the pre-trained model. Rather than feeding the outputs of the last layer, i.e., $\mathbf{h}_t^L$, to the downstream model, we take a weighted average of the hidden states of all layers to generate the frame representation $\mathbf{o}_t$. We then replace the Fbank features fed into the ECAPA-TDNN with these weighted average representations to extract the speaker embedding $\mathbf{e}$:

$$\mathbf{e} = \mathrm{ECAPA\text{-}TDNN}\big(\mathbf{o}_1, \dots, \mathbf{o}_N\big), \quad \mathbf{o}_t = \sum_{l=0}^{L} w_l \, \mathbf{h}_t^l \qquad (1)$$
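
The following sketch illustrates the weighted average in Eq. (1) under the assumption that the per-layer weights are softmax-normalized (as in the SUPERB-style setup [32]); module and variable names are ours.

```python
import torch
import torch.nn as nn

class WeightedLayerSum(nn.Module):
    """Learn one scalar weight per hidden layer and average the layer outputs."""
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))  # one weight per layer (incl. layer 0)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_layers, batch, frames, dim)
        norm_weights = torch.softmax(self.weights, dim=0)      # assumed normalization
        return torch.einsum("l,lbtd->btd", norm_weights, hidden_states)

layer_sum = WeightedLayerSum(num_layers=13)  # 12 Transformer layers + input (e.g. HuBERT_Base)
hidden = torch.randn(13, 8, 150, 768)        # stacked layer outputs
frame_repr = layer_sum(hidden)               # (8, 150, 768), replaces Fbank at the ECAPA-TDNN input
```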

Following the implementation in [6], we also use the additive angular margin (AAM) loss [5] for model optimization during training.
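
For reference, a minimal sketch of an AAM-softmax objective consistent with the description in [5]: the margin is added to the angle between the embedding and its target class weight before the scaled softmax cross-entropy. The scale value is an assumption; the margins used in our experiments are the ones quoted in Section 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAMSoftmax(nn.Module):
    def __init__(self, emb_dim: int, num_classes: int, margin: float = 0.2, scale: float = 30.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, emb_dim))
        nn.init.xavier_normal_(self.weight)
        self.margin, self.scale = margin, scale  # scale is an assumed hyper-parameter

    def forward(self, emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between normalized embeddings and class weights.
        cosine = F.linear(F.normalize(emb), F.normalize(self.weight))          # (B, C)
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        target_logits = torch.cos(theta + self.margin)                          # add angular margin
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        logits = self.scale * (one_hot * target_logits + (1 - one_hot) * cosine)
        return F.cross_entropy(logits, labels)
```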

The training pipeline is divided into two stages. In the first stage, the pre-trained model is fixed, and we only update the ECAPA-TDNN and the weights for all the hidden states. In the second stage, we fine-tune all the parameters of the pre-trained model and the ECAPA-TDNN jointly, as sketched below.
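
A minimal sketch of this two-stage schedule: stage one freezes the pre-trained encoder and optimizes only the layer weights and the ECAPA-TDNN, stage two unfreezes everything for joint fine-tuning. The modules below are stand-ins, and the optimizer and learning rates are assumptions, since the paper does not specify them.

```python
import itertools
import torch
import torch.nn as nn

# Stand-in modules; in practice these are the pre-trained encoder, the learnable
# layer-weighting module, and the ECAPA-TDNN downstream model.
pretrained = nn.Linear(768, 768)
layer_sum = nn.Linear(768, 768)
ecapa_tdnn = nn.Linear(768, 192)

# Stage 1: keep the pre-trained model fixed.
for p in pretrained.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.Adam(
    itertools.chain(layer_sum.parameters(), ecapa_tdnn.parameters()),
    lr=1e-3,  # assumed learning rate
)
# ... train the ECAPA-TDNN and the layer weights ...

# Stage 2: unfreeze and jointly fine-tune all parameters.
for p in pretrained.parameters():
    p.requires_grad_(True)
optimizer = torch.optim.Adam(
    itertools.chain(pretrained.parameters(), layer_sum.parameters(), ecapa_tdnn.parameters()),
    lr=1e-5,  # assumed smaller learning rate for fine-tuning
)
# ... continue training ...
```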

4 Experimental Setup

To analyze the effectiveness of pre-trained model representations for the speaker verification task, we trained and evaluated the downstream speaker verification model using the VoxCeleb1 [17] and VoxCeleb2 [3] datasets. All three official trial lists, Vox1-O, Vox1-E, and Vox1-H, are used to evaluate system performance. For the baseline models using the handcrafted acoustic feature, we extract 40-dimensional Fbank features with a 25 ms window and a 10 ms frame shift. We did not apply voice activity detection (VAD) to the VoxCeleb data. In addition, we augmented the training data online with MUSAN [23] noise and RIR reverberation (https://www.openslr.org/28/) with probability 0.6.
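
For reference, the baseline front-end can be reproduced roughly as follows; the use of torchaudio's Kaldi-compatible Fbank routine is our assumption, since the paper does not name a feature extraction toolkit.

```python
import torch
import torchaudio.compliance.kaldi as kaldi

waveform = torch.randn(1, 16000 * 3)  # placeholder for a 3 s, 16 kHz utterance
fbank = kaldi.fbank(
    waveform,
    num_mel_bins=40,     # 40-dimensional Fbank, as in the baseline
    frame_length=25.0,   # 25 ms window
    frame_shift=10.0,    # 10 ms frame shift
    sample_frequency=16000,
)
print(fbank.shape)  # (num_frames, 40)
```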

The detailed information about the pre-trained models used in our experiments and the downstream speaker verification model is listed in Table 1. The HuBERT_Base, HuBERT_Large, and Wav2Vec2.0_Large (XLSR) models are released with the Fairseq sequence modeling toolkit (https://github.com/pytorch/fairseq). The results in [32] show that Wav2Vec2.0_Base performs worse than HuBERT_Base on speaker-related tasks, so we did not use it here. UniSpeech-SAT is a recently proposed model that explicitly models speaker information during pre-training. It introduces an utterance-wise contrastive loss to model single-speaker information, where the positive instances are hidden states from the same utterance and the negative instances are hidden states from other utterances. Moreover, UniSpeech-SAT uses more publicly available data than HuBERT. For the downstream model, we use the small ECAPA-TDNN from [6].

We trained all the models with the additive angular margin (AAM) loss [5] and set the margin to 0.2. During training, we randomly sampled a 3 s segment from each utterance to construct the training batches. For the two-stage training pipeline described in Section 3.2.2, we first fixed the pre-trained model and trained for 165 epochs, and then fine-tuned all the parameters for another 10 epochs. To further improve our best systems, we applied large margin fine-tuning [25], randomly sampling 6 s segments and setting the AAM margin to 0.5 for an extra 6 epochs.

During evaluation, we use the cosine score to measure the similarity of trial pairs. We also apply adaptive s-norm [15, 4] to normalize the scores. The embeddings extracted from the training set are averaged according to the speaker labels and used as the impostor cohort, whose size is set to 600 in our experiments. For quality-aware score calibration [25], we randomly generated 30k trials from the VoxCeleb2 test set to train the calibration model.
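
A minimal sketch of this scoring back-end (cosine scoring followed by adaptive s-norm against the speaker-averaged cohort); the number of top cohort scores kept per side is an assumption, as the paper only specifies the cohort size of 600.

```python
import torch
import torch.nn.functional as F

def adaptive_snorm(enroll, test, cohort, top_n=300):
    # enroll, test: (D,) embeddings; cohort: (N, D) speaker-averaged cohort embeddings
    enroll = F.normalize(enroll, dim=0)
    test = F.normalize(test, dim=0)
    cohort = F.normalize(cohort, dim=1)

    score = torch.dot(enroll, test)          # raw cosine score of the trial
    e_cohort = cohort @ enroll               # enrollment-vs-cohort scores
    t_cohort = cohort @ test                 # test-vs-cohort scores
    e_top = e_cohort.topk(top_n).values      # adaptive: keep only the closest cohort members
    t_top = t_cohort.topk(top_n).values

    return 0.5 * ((score - e_top.mean()) / e_top.std()
                  + (score - t_top.mean()) / t_top.std())

cohort = torch.randn(600, 192)               # placeholder for 600 speaker-averaged embeddings
print(adaptive_snorm(torch.randn(192), torch.randn(192), cohort))
```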

Feature | Aug | Pretrain Feature | Vox1-O EER (%)
Fbank | ✗ | - | 3.899
HuBERT_Base | ✗ | Last | 3.691
HuBERT_Base | ✗ | Hidden | 2.117
Fbank | ✓ | - | 2.371
HuBERT_Base | ✓ | Last | 3.079
HuBERT_Base | ✓ | Hidden | 1.861
UniSpeech-SAT_Base | ✓ | Hidden | 1.632

Table 2: Comparison with the traditional acoustic feature on VoxCeleb1. Here, we trained all the models on the VoxCeleb1 dev set and evaluated on the Vox1-O trial. The pre-trained models were fixed during training and only used to extract speech representations.

5 Evaluation Results

5.1 Comparison with Handcrafted Acoustic Feature

First, we compare the speech representations extracted from pre-trained models with the commonly used handcrafted feature. The experiments in [8] have shown that Wav2Vec 2.0 pre-trained models contain speaker information and can achieve performance comparable to handcrafted acoustic features. Different from [8], in our experiments we directly replace the handcrafted features fed to the speaker verification model ECAPA-TDNN with the representations from the pre-trained models. Besides, we explore two ways of leveraging the pre-trained representations: using the representation from the last layer, or taking a weighted average of all the hidden representations. The results are shown in Table 2. From the upper part of the table, we find that both the last-layer representation and the weighted average of all hidden layers outperform the handcrafted Fbank feature. Encouragingly, the weighted average of the hidden representations exceeds Fbank by a very large margin (46% relative). We then augment the training data, with the results listed in the bottom part of Table 2. With data augmentation, all the results are further improved, and the weighted average of the hidden representations again shows superiority over the Fbank feature. For the experiments in the following sections, we use the weighted average of the hidden representations for the pre-trained models and augment the training data.

Train Data | Large Margin Finetune | Score Calibration | Fix Pretrain | Feature | Vox1-O EER (%) | Vox1-E EER (%) | Vox1-H EER (%)
Vox1_dev | ✗ | ✗ | - | Fbank | 2.371 | - | -
Vox1_dev | ✗ | ✗ | ✓ | UniSpeech-SAT_Base | 1.632 | - | -
Vox1_dev | ✗ | ✗ | ✓ | HuBERT_Large | 1.436 | - | -
Vox1_dev | ✗ | ✗ | ✓ | Wav2Vec2.0_Large (XLSR) | 1.362 | - | -
Vox1_dev | ✗ | ✗ | ✓ | UniSpeech-SAT_Large | 1.249 | - | -
Vox1_dev | ✗ | ✗ | ✗ | UniSpeech-SAT_Base | 1.611 | - | -
Vox1_dev | ✗ | ✗ | ✗ | HuBERT_Large | 1.404 | - | -
Vox1_dev | ✗ | ✗ | ✗ | Wav2Vec2.0_Large (XLSR) | 1.335 | - | -
Vox1_dev | ✗ | ✗ | ✗ | UniSpeech-SAT_Large | 1.218 | - | -
Vox2_dev | ✗ | ✗ | - | Fbank (ECAPA-TDNN small [6]) | 1.010 | 1.240 | 2.320
Vox2_dev | ✗ | ✗ | - | Fbank (ECAPA-TDNN large [6]) | 0.870 | 1.120 | 2.120
Vox2_dev | ✗ | ✗ | - | Fbank | 1.080 | 1.200 | 2.127
Vox2_dev | ✗ | ✗ | ✓ | UniSpeech-SAT_Base | 1.016 | 1.139 | 2.310
Vox2_dev | ✗ | ✗ | ✓ | HuBERT_Large | 0.888 | 0.912 | 1.853
Vox2_dev | ✗ | ✗ | ✓ | Wav2Vec2.0_Large (XLSR) | 0.915 | 0.945 | 1.895
Vox2_dev | ✗ | ✗ | ✓ | UniSpeech-SAT_Large | 0.771 | 0.781 | 1.669
Vox2_dev | ✗ | ✗ | ✗ | UniSpeech-SAT_Base | 0.978 | 0.987 | 2.039
Vox2_dev | ✗ | ✗ | ✗ | HuBERT_Large | 0.808 | 0.822 | 1.678
Vox2_dev | ✗ | ✗ | ✗ | Wav2Vec2.0_Large (XLSR) | 0.792 | 0.773 | 1.582
Vox2_dev | ✗ | ✗ | ✗ | UniSpeech-SAT_Large | 0.713 | 0.684 | 1.500
Vox2_dev | ✓ | ✗ | ✗ | HuBERT_Large | 0.649 | 0.695 | 1.428
Vox2_dev | ✓ | ✗ | ✗ | Wav2Vec2.0_Large (XLSR) | 0.627 | 0.643 | 1.312
Vox2_dev | ✓ | ✗ | ✗ | UniSpeech-SAT_Large | 0.601 | 0.597 | 1.321
Vox2_dev | ✓ | ✓ | ✗ | HuBERT_Large | 0.585 | 0.654 | 1.342
Vox2_dev | ✓ | ✓ | ✗ | Wav2Vec2.0_Large (XLSR) | 0.564 | 0.605 | 1.230
Vox2_dev | ✓ | ✓ | ✗ | UniSpeech-SAT_Large | 0.564 | 0.561 | 1.230
Vox2_dev | ✓ | ✓ | ✗ | Ensemble | 0.431 | 0.507 | 1.081

Table 3: Results with different pre-trained models and different training strategies. Data augmentation was applied in all experiments. In the last Ensemble row, we take a weighted average of the scores of our best three systems after score calibration, with the weights chosen according to the performance of each single system. The Large models perform much better than the Base models, so only a part of the Base-model results is listed due to space limitations.

5.2 Comparison among Different Pre-trained Models

To further improve the effectiveness of the representations from the pre-trained models, we trained the model on a larger dataset, Voxceleb2_dev, and compared different pre-trained models and training strategies. All results are shown in Table 3. All the Large models perform better than the Fbank feature in both the Vox1_dev and Vox2_dev setups. When we unfix the pre-trained model and jointly fine-tune the pre-trained model and the downstream model, further improvements are achieved; the gain from fine-tuning the pre-trained model is more obvious in the Vox2_dev setup than in the Vox1_dev setup. Besides, the Wav2Vec2.0_Large (XLSR) and UniSpeech-SAT_Large pre-trained models perform better than HuBERT_Large after fine-tuning. As shown in Table 1, the training set sizes of Wav2Vec2.0_Large (XLSR) and HuBERT_Large are comparable, but the training data for Wav2Vec2.0_Large (XLSR) is more diverse and better matched with the VoxCeleb data, making it more suitable for this downstream task. As expected, the UniSpeech-SAT_Large model, trained on the most data, performs best among all the pre-trained models. Compared to the Fbank feature, its representations achieve around 30% relative EER improvement on all three trials of the VoxCeleb1 evaluation set.

In [25], the authors introduced a large margin fine-tuning strategy and quality-aware score calibration for speaker verification and achieved impressive improvements. Here, we also leverage these two strategies to push the performance limit. The corresponding results are listed in the bottom part of Table 3. With these two strategies, our best system exceeds the state-of-the-art system [36] (Vox1-O: 0.461, Vox1-E: 0.634, Vox1-H: 0.993) from the VoxSRC 2021 challenge on the Vox1-E trial.

Figure 2: Visualization of the normalized weight values in the proposed architecture shown in Figure 1. The output of layer 0 corresponds to the Transformer input. * in the figure means that the pre-trained model is unfixed during downstream task training. Note that the Base models only have 12 layers, while the other models have 24 layers.

5.3 Analysis of Speaker Information in the Pre-trained Models

The results in Section 5.1 have shown that it is better to leverage the representations from all the hidden layers rather than only the last layer. It is therefore worth exploring which layers contain more speaker information than others. We visualize the normalized weight values for all layers' outputs in Figure 2. The figure shows that, for the ASV task, the speaker information in the lower layers of the pre-trained models is more discriminative than that in the higher layers. This is reasonable because the training objectives of the pre-trained models used in our experiments are more related to speech recognition. For the large pre-trained models in our experiments, i.e., UniSpeech-SAT_Large, HuBERT_Large, and Wav2Vec2.0_Large (XLSR), the learned weights assigned to the higher layers are much smaller than those of the lower layers, which indicates that we might be able to discard these higher layers to reduce the model size.
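
The layer-wise inspection behind Figure 2 can be reproduced with a few lines once a model is trained: softmax-normalize the learned scalar weights and plot them per layer. The weight tensor below is a random stand-in for the trained parameters, and the plotting code is illustrative.

```python
import torch
import matplotlib.pyplot as plt

# Stand-in for the trained per-layer weights (24 Transformer layers + layer 0 input).
learned_weights = torch.randn(25)

norm_weights = torch.softmax(learned_weights, dim=0).numpy()
plt.bar(range(len(norm_weights)), norm_weights)
plt.xlabel("Layer index (0 = Transformer input)")
plt.ylabel("Normalized weight")
plt.show()
```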

6 Conclusion

In this paper, we leverage representations extracted from pre-trained models trained on large-scale unlabeled data for the speaker verification task. In our experiments, we first compared such representations with the handcrafted Fbank feature and verified the superiority of the pre-trained representations. To comprehensively exploit the speaker information in the pre-trained models, we let the model automatically learn weights for all the hidden states of the pre-trained model, achieving significant performance improvements over the baseline. By visualizing the learned weights, we find that the lower layers of the pre-trained model capture more speaker-related information than the higher layers. Despite the significant improvement brought by the pre-trained models, there is still a relatively small performance gap (on two evaluation sets) between our system and the best system [36] in the VoxSRC 2021 challenge, which uses a more aggressive augmentation strategy and dedicated training objectives. In the future, we will incorporate the better training setup of [36] into our system to further push the limits of speaker verification performance.

References

  • [1] A. Baevski, H. Zhou, A. Mohamed, and M. Auli (2020) Wav2vec 2.0: a framework for self-supervised learning of speech representations. arXiv preprint arXiv:2006.11477. Cited by: §1, §1, §3.1.
  • [2] D. Cai, W. Wang, and M. Li (2021) An iterative framework for self-supervised deep speaker representation learning. In Proc. IEEE ICASSP 2021, pp. 6728–6732. Cited by: §1.
  • [3] J. S. Chung, A. Nagrani, and A. Zisserman (2018) Voxceleb2: deep speaker recognition. arXiv preprint arXiv:1806.05622. Cited by: §4.
  • [4] S. Cumani, P. D. Batzu, D. Colibro, C. Vair, P. Laface, and V. Vasilakakis (2011) Comparison of speaker recognition approaches for real applications.. In INTERSPEECH, pp. 2365–2368. Cited by: §4.
  • [5] J. Deng, J. Guo, N. Xue, and S. Zafeiriou (2019) ArcFace: additive angular margin loss for deep face recognition. In Proc. CVPR, pp. 4690–4699. Cited by: §3.2.2, §4.
  • [6] B. Desplanques, J. Thienpondt, and K. Demuynck (2020) ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification. In Proc. Interspeech 2020, pp. 3830–3834. Cited by: Large-scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification, item 1, §1, §3.2.1, §3.2.2, Table 1, §4, Table 3.
  • [7] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1.
  • [8] Z. Fan, M. Li, S. Zhou, and B. Xu (2021) Exploring wav2vec 2.0 on Speaker Verification and Language Identification. In Proc. Interspeech 2021, pp. 1509–1513. External Links: Document Cited by: §1, §3.2.1, §3.2.2, §5.1.
  • [9] S. Gao, M. Cheng, K. Zhao, X. Zhang, M. Yang, and P. H. Torr (2019) Res2net: a new multi-scale backbone architecture. IEEE transactions on pattern analysis and machine intelligence. Cited by: §3.2.1.
  • [10] W. Hsu, B. Bolte, Y. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed (2021) HuBERT: self-supervised speech representation learning by masked prediction of hidden units. arXiv preprint arXiv:2106.07447. Cited by: §1, §1, §3.1.
  • [11] J. Hu, L. Shen, and G. Sun (2018) Squeeze-and-excitation networks. In Proc. CVPR, pp. 7132–7141. Cited by: §3.2.1.
  • [12] Z. Huang, S. Wang, and K. Yu (2018) Angular softmax for short-duration text-independent speaker verification.. In Interspeech, pp. 3623–3627. Cited by: §1.
  • [13] G. Jawahar, B. Sagot, and D. Seddah (2019-07) What does BERT learn about the structure of language?. In Proc. ACL, pp. 3651–3657. External Links: Document Cited by: §1.
  • [14] J. Jung, H. Heo, J. Kim, H. Shim, and H. Yu (2019) Rawnet: advanced end-to-end deep neural network using raw waveforms for text-independent speaker verification. arXiv preprint arXiv:1904.08104. Cited by: §2.
  • [15] Z. N. Karam, W. M. Campbell, and N. Dehak (2011) Towards reduced false-alarms using cohorts. In Proc. IEEE ICASSP 2011, pp. 4512–4515. Cited by: §4.
  • [16] Y. Liu, Y. Qian, N. Chen, T. Fu, Y. Zhang, and K. Yu (2015) Deep feature for text-dependent speaker verification. Speech Communication 73, pp. 1–13. Cited by: §1.
  • [17] A. Nagrani, J. S. Chung, and A. Zisserman (2017) Voxceleb: a large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612. Cited by: §1, §4.
  • [18] K. Okabe, T. Koshinaka, and K. Shinoda (2018) Attentive statistics pooling for deep speaker embedding. arXiv preprint arXiv:1803.10963. Cited by: §1.
  • [19] A. Pasad, J. Chou, and K. Livescu (2021) Layer-wise analysis of a self-supervised speech representation model. CoRR abs/2107.04734. External Links: 2107.04734 Cited by: §1.
  • [20] L. Pepino, P. Riera, and L. Ferrer (2021) Emotion Recognition from Speech Using wav2vec 2.0 Embeddings. In Proc. Interspeech 2021, pp. 3400–3404. External Links: Document Cited by: §3.2.2.
  • [21] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding by generative pre-training. . Cited by: §1.
  • [22] M. Ravanelli and Y. Bengio (2018) Speaker recognition from raw waveform with sincnet. In Proc. IEEE SLT, pp. 1021–1028. Cited by: §2.
  • [23] D. Snyder, G. Chen, and D. Povey (2015) MUSAN: A Music, Speech, and Noise Corpus. Note: arXiv:1510.08484v1 External Links: 1510.08484 Cited by: §4.
  • [24] D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudanpur (2018) X-vectors: robust dnn embeddings for speaker recognition. In Proc. IEEE ICASSP 2018, pp. 5329–5333. Cited by: §1, §3.2.1.
  • [25] J. Thienpondt, B. Desplanques, and K. Demuynck (2021) The idlab voxsrc-20 submission: large margin fine-tuning and quality-aware score calibration in dnn based speaker verification. In Proc. IEEE ICASSP 2021, pp. 5814–5818. Cited by: §4, §4, §5.2.
  • [26] M. Todisco, H. Delgado, and N. W. Evans (2016) Articulation rate filtering of cqcc features for automatic speaker verification.. In Interspeech, pp. 3628–3632. Cited by: §2.
  • [27] M. Todisco, H. Delgado, and N. Evans (2017) Constant q cepstral coefficients: a spoofing countermeasure for automatic speaker verification. Computer Speech & Language 45, pp. 516–535. Cited by: §2.
  • [28] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Proc. NIPS, pp. 5998–6008. Cited by: §3.1.
  • [29] L. Wan, Q. Wang, A. Papir, and I. L. Moreno (2018) Generalized end-to-end loss for speaker verification. In Proc. IEEE ICASSP 2018, pp. 4879–4883. Cited by: §1.
  • [30] W. Xia, C. Zhang, C. Weng, M. Yu, and D. Yu (2021) Self-supervised text-independent speaker verification using prototypical momentum contrastive learning. In Proc. IEEE ICASSP 2021, pp. 6723–6727. Cited by: §1.
  • [31] X. Xiang, S. Wang, H. Huang, Y. Qian, and K. Yu (2019) Margin matters: towards more discriminative deep neural network embeddings for speaker recognition. In Proc. APSIPA ASC 2019, pp. 1652–1656. Cited by: §1.
  • [32] S. Yang, P. Chi, Y. Chuang, C. J. Lai, K. Lakhotia, Y. Y. Lin, A. T. Liu, J. Shi, X. Chang, G. Lin, et al. (2021) SUPERB: speech processing universal performance benchmark. arXiv preprint arXiv:2105.01051. Cited by: §1, §2, §3.2.1, §3.2.2, §4.
  • [33] H. Zeinali, S. Wang, A. Silnova, P. Matějka, and O. Plchot (2019) But system description to voxceleb speaker recognition challenge 2019. arXiv preprint arXiv:1910.12592. Cited by: §1.
  • [34] C. Zhang and K. Koishida (2017) End-to-end text-independent speaker verification with triplet loss on short utterances.. In Interspeech, pp. 1487–1491. Cited by: §1.
  • [35] H. Zhang, Y. Zou, and H. Wang (2021) Contrastive self-supervised learning for text-independent speaker verification. In Proc. IEEE ICASSP 2021, pp. 6713–6717. Cited by: §1.
  • [36] M. Zhao, Y. Ma, M. Liu, and M. Xu (2021) The speakin system for voxceleb speaker recognition challange 2021. arXiv preprint arXiv:2109.01989. Cited by: Large-scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification, §5.2, §6.
  • [37] Y. Zhu, T. Ko, D. Snyder, B. Mak, and D. Povey (2018) Self-attentive speaker embeddings for text-independent speaker verification.. In Interspeech, Vol. 2018, pp. 3573–3577. Cited by: §1.