Pseudo-label training is one of the most popular semi-supervised learning approaches and has recently demonstrated its efficacy in automatic speech recognition. In this approach, a smaller labeled set is used to train an initial seed model, which is applied to a larger amount of unlabeled data to generate hypotheses. The unlabeled data with the most reliable hypotheses are added to the training data for re-training. This process can be repeated iteratively to improve the quality of the pseudo labels. However, pseudo-label training is sensitive to the quality of the hypotheses: errors or noise in the labels can make training unstable and lead to sub-optimal states, especially for end-to-end speech recognition models. Thus, pseudo-label training usually requires careful calibration by confidence measures [14, 25]. But confidence-based data filtering will never work perfectly, since most pseudo-label sequences contain at least some errors.
Starting from BERT , masked prediction has become a new principle for solving problems in self-supervised settings in NLP. The core idea of masked prediction is to force the model to learn good high-level representations of the unmasked inputs in order to correctly infer the targets of the masked ones. In speech, approaches sharing the same spirit have been proposed: masked prediction of audio acoustic features [21, 20], masked prediction of quantized acoustic features , and masked prediction of unsupervised clusters . Experiments in  also showed that computing the loss only on the masked regions achieves better performance than computing it on all regions.
We draw inspiration from masked prediction and integrate its idea into pseudo-label training. We propose the Gradient Mask to improve pseudo-label training in end-to-end speech recognition. In our approach, we first train a seed model to generate pseudo labels and then use the Gradient Mask to train a student model on the pseudo labels. The model only allows gradients corresponding to masked inputs to back-propagate through the encoder, by masking the gradients corresponding to unmasked inputs. The model is trained by jointly minimizing the loss on labeled and pseudo-labeled data, while the Gradient Mask is turned off on the labeled data.
Our training method forces the model to learn a strong acoustic representation in order to infer from masked inputs. Moreover, it also improves pseudo-label training by making the model less affected by label noise. The intuition is that only the gradients of the masked part are used when updating the model's parameters, which avoids the sudden dramatic changes in gradients caused by errors and also alleviates overfitting to corrupted labels. Our approach is simple and efficient since it doesn't require any extra parameters, extra losses, or data filtering steps. We run our experiments using the Transducer model . The experiments show that our method is robust to label noise and achieves competitive results compared with other self/semi-supervised approaches in the LibriSpeech 100-hour experiments.
2 Related work
2.1 Combating noisy labels
Errors in labels can be extremely harmful to models. Beyond conventional data filtering/cleaning techniques, deep learning approaches to this problem have recently gained vast interest, and several works investigate supervised learning under noisy labels in computer vision. However, these models cannot be directly applied to ASR, and fewer studies have proposed to combat noisy labels for ASR. In , the phonetic sequence was inferred from several noisy transcriptions made by non-native transcribers using a misperception model, and then used to train a conventional hybrid ASR model.  proposes a novel loss function that jointly learns the ASR model and a transcription graph that can search for better transcriptions of the training data.
2.2 Joint training with self-supervised and ASR tasks
The idea of self-supervised learning [21, 20, 18, 11, 29, 3, 17, 1] is to learn speech representations that are useful for ASR. By first pre-training on a large amount of unlabeled data using a proxy task, the model can be fine-tuned on labeled data and achieve impressive results. This is a two-stage process, as it requires running separate pre-training and fine-tuning. Joint training with speech recognition and self-supervised representation learning [19, 32, 35, 34] is the line of work that simplifies this process and is the closest to our method. Those methods typically have two training objectives: one is for the ASR task on the labeled data, while the other trains the self-supervised representation (e.g. masked feature prediction ) on the unlabeled data. Our method is much simpler and uses only one loss on both labeled and unlabeled (pseudo-labeled) data.
In speech recognition, an E2E model predicts the conditional distribution P(Y|X) of a token sequence Y = (y_1, ..., y_U) given a speech-feature sequence X = (x_1, ..., x_T) as the input, where y_u ∈ V and x_t is the acoustic feature vector at time t. V is the set of all possible output tokens. We explain our method with the Transducer model, but it can be readily adapted to other end-to-end ASR models (e.g. CTC , seq2seq ) as well.
3.1 Transducer model
The Transducer model  consists of an encoder, a prediction network, and a joint network. The encoder encodes the input X into a higher-level representation h^enc = (h_1^enc, ..., h_T^enc).
The prediction network takes embedding vectors of the previous non-blank labels as input to produce its output h_u^pred at step u. The logits over the vocabulary at frame t and step u can then be computed by the joint network:

z_{t,u} = Joint(h_t^enc, h_u^pred).    (1)
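As an illustrative sketch (our own, not the paper's released code), a joint network of this kind can be written in PyTorch as follows; the additive combination, the tanh nonlinearity, and the projection dimensions are common Transducer choices that we assume here:

```python
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    """Combines encoder output h_enc[t] and prediction-network output
    h_pred[u] into logits z[t, u] over the vocabulary (plus blank)."""

    def __init__(self, enc_dim, pred_dim, joint_dim, vocab_size):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, joint_dim)
        self.pred_proj = nn.Linear(pred_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab_size + 1)  # +1 for the blank token

    def forward(self, h_enc, h_pred):
        # h_enc: (B, T, enc_dim), h_pred: (B, U, pred_dim)
        # Broadcast-add over the (T, U) grid, then project to logits.
        z = self.enc_proj(h_enc).unsqueeze(2) + self.pred_proj(h_pred).unsqueeze(1)
        return self.out(torch.tanh(z))  # (B, T, U, vocab_size + 1)
```

The resulting (B, T, U, |V|+1) logit tensor is what a standard RNN-T loss (e.g. `torchaudio.functional.rnnt_loss`) consumes.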
3.2 Gradient mask
For a sequence X with pseudo labels Ŷ, the objective is to enable the model to predict the labels from the masked features. In other words, the encoder is trained to be a strong acoustic representation model which can benefit the ASR task.
Before feeding the features to the encoder, we randomly generate a sequence M = (m_1, ..., m_T) representing the mask positions for the input sequence X. Specifically, m_t is 1 if the features are masked at time t, and otherwise m_t is 0. A feature vector, for example x_t, is masked by replacing it with a learnt mask embedding e_mask. The encoder then encodes this masked sequence X̃:

h^enc = Encoder(X̃).
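The replacement of masked frames by the learnt embedding can be sketched as follows; `apply_feature_mask` and its signature are our own illustrative names, with `mask_emb` standing in for the learnt mask embedding (an `nn.Parameter` in a real model):

```python
import torch

def apply_feature_mask(x, mask, mask_emb):
    """Replace masked time steps of x with a learnt mask embedding.

    x:        (B, T, D) input features
    mask:     (B, T) tensor, 1.0 where the frame is masked, 0.0 otherwise
    mask_emb: (D,) embedding vector (learnable in the real model)
    """
    m = mask.unsqueeze(-1).to(x.dtype)       # (B, T, 1), broadcast over D
    return x * (1.0 - m) + mask_emb * m      # masked frames -> mask_emb
```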
Our masking strategy is the same as in , where we randomly sample without replacement a certain proportion of all time steps to be starting indices and then mask the subsequent consecutive time steps from every sampled index; spans may overlap.
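Under the stated strategy (sample a proportion p of time steps as span starts, mask the next `span` steps from each, spans may overlap), a minimal sketch might look like this; the function name is hypothetical, and the defaults echo the values reported later in Section 4:

```python
import random

def sample_span_mask(T, p=0.065, span=3, seed=None):
    """Sample a binary mask of length T: a proportion p of all time steps
    is chosen (without replacement) as span starts, and the `span`
    consecutive steps from each start are masked; spans may overlap.
    Illustrative sketch, not the authors' code."""
    rng = random.Random(seed)
    n_starts = max(1, round(p * T))
    starts = rng.sample(range(T), n_starts)  # without replacement
    mask = [0] * T
    for s in starts:
        for t in range(s, min(s + span, T)):
            mask[t] = 1
    return mask
```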
When the gradient is back-propagated to the encoder, we mask the gradients corresponding to the non-masked inputs using the sequence M:

∂L/∂h_t^enc ← m_t · ∂L/∂h_t^enc.
The prediction network takes the pseudo-label sequence Ŷ as its input, and the joint network then produces the output as in (1). During back-propagation, however, we also block the gradient flow into the prediction network. This process can be expressed as:

z_{t,u} = Joint(h_t^enc, sg(h_u^pred)),

where sg(·) is the stop-gradient operator. The objective function is still the same Transducer loss, where we minimize the negative log-probability summed over all alignment paths.
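One way to realize both the gradient mask on the encoder output and the stop-gradient on the prediction network, without custom backward hooks, is the standard detach trick: positions that should receive no gradient use a detached copy of the activation, so the forward values are unchanged but no encoder gradient flows through them. This is our illustration, not necessarily the authors' implementation:

```python
import torch

def gradient_mask(h_enc, mask):
    """Let gradients flow back through the encoder only at masked positions.

    h_enc: (B, T, D) encoder output; mask: (B, T), 1.0 = masked position.
    Unmasked positions use a detached (stop-gradient) copy, so the forward
    pass is identical but those positions contribute no encoder gradient.
    """
    m = mask.unsqueeze(-1).to(h_enc.dtype)
    return h_enc * m + h_enc.detach() * (1.0 - m)

# On pseudo-label batches, the prediction network is fully stopped, i.e.
# the stop-gradient operator sg(.) is simply:  h_pred = h_pred.detach()
```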
3.3 Training procedure
The whole training process is similar to the standard pseudo-labeling approach. Let L be a labeled dataset and U be a large unlabeled dataset. We first train a seed acoustic model M on the labeled dataset L. We use this seed acoustic model M to generate pseudo labels on the dataset U, yielding a pseudo-labeled dataset Û, and we then combine it with all the labeled data in L to form the new dataset D = L ∪ Û.
The next step is to train a student model using both the labeled and the pseudo-labeled datasets. The model is trained by alternately minimizing the losses on the two. When updating the model parameters using a minibatch from the pseudo-label dataset, we apply the gradient mask method described in 3.2; on a minibatch from the labeled dataset, we update the parameters in the standard Transducer way of 3.1. This process is repeated until the word error rate on the validation dataset converges. Since the loss function is the same for both datasets, we use a single momentum optimizer and the same learning rate for simplicity. The ratio of labeled minibatches to pseudo-label minibatches is a hyper-parameter to be tuned.
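The alternating schedule can be sketched as a simple batch interleaver; the function name and signature are hypothetical, and the 1:9 default mirrors the ratio reported in Section 4:

```python
import itertools

def interleave_batches(labeled, pseudo, ratio=9):
    """Yield (batch, use_gradient_mask) pairs, alternating one labeled
    minibatch with `ratio` pseudo-label minibatches (illustrative sketch).

    labeled, pseudo: iterables of minibatches, cycled indefinitely.
    """
    lab, ps = itertools.cycle(labeled), itertools.cycle(pseudo)
    while True:
        yield next(lab), False       # standard Transducer update
        for _ in range(ratio):
            yield next(ps), True     # gradient-mask update
```

In the training loop, the boolean flag would select whether the gradient mask and the stop-gradient on the prediction network are applied for that step.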
We conducted our experiments on the LibriSpeech  datasets. The labeled dataset is the 100-hour subset (train-clean-100) of LibriSpeech, and the remaining 860 hours (train-clean-360, train-other-500) form the unlabeled dataset. During training, samples longer than 20 seconds are filtered out. The performance of the trained model is validated on the dev-clean and dev-other datasets of LibriSpeech and tested on the test-clean/other datasets. We did not use any extra text or LM information in any of our experiments.
We use around 5k subword  units as our prediction targets. We extract 80-channel filterbank features computed from a 25ms window with a stride of 10ms. When training on labeled data, we use speed perturbation and SpecAugment [26, 9] with frequency-mask parameter F = 27 and ten time masks with maximum time-mask ratio pS = 0.05, where the maximum size of a time mask is pS times the length of the utterance.
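The time-masking part of this SpecAugment policy can be sketched as follows (frequency masking with F = 27 is analogous along the feature axis); this is an illustrative sketch with hypothetical names, not the authors' augmentation code:

```python
import numpy as np

def time_mask(feats, n_masks=10, pS=0.05, rng=None):
    """Apply SpecAugment-style time masks: `n_masks` masks, each of width
    at most pS * T frames, zeroing the masked span.

    feats: (T, F) filterbank features; returns a masked copy.
    """
    rng = rng or np.random.default_rng(0)
    T = feats.shape[0]
    out = feats.copy()
    max_w = max(1, int(pS * T))
    for _ in range(n_masks):
        w = int(rng.integers(0, max_w + 1))        # mask width in frames
        t0 = int(rng.integers(0, max(1, T - w + 1)))  # mask start
        out[t0:t0 + w, :] = 0.0
    return out
```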
The filterbank features are first passed into 2 blocks of 2d-conv layers; a time-reduction layer is added after each block to down-sample the frame rate by a factor of 4 before passing into the encoder. The encoder consists of 17 conformer blocks, where we set the model dimension to 512 and the inner dimension of the feed-forward layer to 2048, with 8 attention heads and a kernel size of 32 in the convolution block, the same setting as Conformer-L . We use an LSTM as our prediction network; it contains 1 layer with 640 units and a projection layer with 640 units. The Transducer's joint network is a simple feed-forward layer. The total number of parameters is about 130M. Our model is implemented in PyTorch and optimized with Adam. We use this same model in all of our experiments.
For the 100-hour seed model, we first train a GMM-based model in Kaldi  to obtain alignments on the 100-hour subset, and we use the frame-wise phoneme labels to pre-train the encoder. We then use the pre-trained encoder to initialize our Transducer model . For training the Transducer model, we warm up the learning rate over the first 10k updates to a peak of 1e-4, hold for 60k steps, then decay it linearly. We group the input sequences by length with a batch size of 10k frames per GPU and train the models on 4 GPUs for 160k steps in total.
For training the student model, the mask probability is set to 0.065 and the mask span length is set to 3 (equal to 12 frames or 0.12 second). This masking schema is similar to , and it results in around half of the frames being masked. We set the ratio of minibatches from labeled data to pseudo-label data to 1:9. This ratio is the same as the ratio of the amounts of data, and it produces the ASR model with the best performance. We warm up the learning rate over the first 10k updates to a peak of 2e-4, hold for 80k steps, then decay it linearly. We group the input sequences by length with a batch size of 10k frames per GPU and train the models on 8 GPUs for 180k steps.
4.3.1 Supervised baseline
Table 2 shows the results of our seed model and a comparison with LibriSpeech 100-hour supervised models from other papers. We use this seed model to generate the first version of the pseudo labels. The resulting 860 hours of pseudo labels have a WER of around 9.
4.3.2 Semi-supervised experiments
Table 1 shows the results of the semi-supervised experiments. NST-Iter1 is the experiment where we simply mix the pseudo-label data and the labeled data to form the new training dataset and train the student model on it. This is a simplified version of noisy student training, since we did not do any filtering, LM fusion, or data selection [14, 25].
GM-Iter1 and GM-Iter5 are the models using the gradient mask method. GM-Iter1 in the table shows the results of the student model trained directly on the pseudo labels generated by the seed model. Our proposed approach significantly outperforms the 100-hour supervised baselines in Table 2 and also the noisy student training baseline. For GM-Iter5, we iterate the pseudo-labeling process 5 times. The model of GM-Iter5 achieves highly competitive performance, with a WER of 4.1/8.8 on dev-clean/dev-other and 4.3/8.9 on test-clean/test-other. It is worth noting that our method is highly efficient: we use much fewer computing resources and a much smaller model size compared with the other approaches in Table 1.
4.4 Ablation study and analysis
4.4.1 Pseudo-labeling iterations
To study the performance across pseudo-labeling iterations, Table 3 shows the WER on test-clean/other of each training iteration using the gradient mask method. We stopped this process after the 5th iteration since the improvement at iter5 is already minimal.
| iterations | test-clean | test-other |
4.4.2 Gradient mask on labels of different qualities
We conduct an ablation study to investigate the effect of the gradient mask on labels of different qualities, running experiments with and without the gradient mask method on each set of labels. Without the gradient mask, training is the same as standard Transducer training. The training data comprises 860 hours of pseudo-label data and the 100 hours of labeled data. Pseudo(WER-9) is the pseudo-label set generated by the 100h seed model, which has a WER of around 9. Pseudo(WER-15) is generated by the same supervised system but from an early epoch with a WER of around 15. Pseudo(WER-5) is generated by the student model from the 3rd iteration. And Pseudo(WER-2) is generated by an intermediate model trained on the 960 hours of labeled data.
When the pseudo labels contain many errors (WER 15), simply adding them causes the model performance to degrade compared with the 100h baseline in Table 2. Even with high-quality pseudo labels (WER 5), the noise in the labels still hurts the model performance. The gradient mask method, on the other hand, is robust to bad-quality labels and works consistently well on labels of different qualities. We found that the worse the pseudo labels' quality, the larger the gain obtained with the gradient mask method compared with standard training. Standard training performs comparably to the gradient mask method when the pseudo labels have a WER of around 2, and performs better when we use the ground-truth reference labels.
In this paper, we present the Gradient Mask method, a simple and efficient method to improve pseudo-label training for end-to-end speech recognition. Our method forces the model to learn acoustic representations and is robust to errors in labels, so it can be used to combat label noise in pseudo-label training. In semi-supervised experiments, our method achieves much better performance than the conventional pseudo-label training approach and performs comparably to the SOTA approaches while being much more computation-efficient. Future work includes exploring extensions to other end-to-end ASR systems like LAS and other sequence-to-sequence tasks like machine translation.
-  (2020) Wav2vec 2.0: a framework for self-supervised learning of speech representations. arXiv preprint arXiv:2006.11477. Cited by: §2.2, §3.2, Table 1, §4.2.
-  (2016) Listen, attend and spell: a neural network for large vocabulary conversational speech recognition. In ICASSP, pp. 4960–4964. Cited by: §3.
-  (2019) An unsupervised autoregressive model for speech representation learning. In Interspeech, pp. 146–150. Cited by: §2.2.
-  (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pp. 4171–4186. Cited by: §1.
-  (2019) Lead2Gold: towards exploiting the full potential of noisy transcriptions for speech recognition. In ASRU, pp. 78–85. Cited by: §2.1.
-  (2013) Classification in the presence of label noise: a survey. IEEE transactions on neural networks and learning systems 25 (5), pp. 845–869. Cited by: §2.1.
-  (2006) Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In ICML, pp. 369–376. Cited by: §3.
-  (2012) Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711. Cited by: §1, §3.1, §3.
-  (2020) Conformer: convolution-augmented transformer for speech recognition. arXiv preprint arXiv:2005.08100. Cited by: §4.1, §4.2.
-  (2016) ASR for under-resourced languages from probabilistic transcription. IEEE/ACM Transactions on Audio, Speech, and Language Processing 25 (1), pp. 50–63. Cited by: §2.1.
-  (2021) HuBERT: self-supervised speech representation learning by masked prediction of hidden units. arXiv preprint arXiv:2106.07447. Cited by: §1, §2.2.
-  (2020) Exploring pre-training with alignments for rnn transducer based end-to-end speech recognition. In ICASSP, pp. 7079–7083. Cited by: §4.2.
-  (2016) Semi-supervised training in deep learning acoustic model. In Interspeech, pp. 3848–3852. Cited by: §1.
-  (2020) Self-training for end-to-end speech recognition. In ICASSP, pp. 7084–7088. Cited by: §1, §4.3.2.
-  (2018) Semi-supervised end-to-end speech recognition.. In Interspeech, pp. 2–6. Cited by: §1.
-  (2020) Slimipl: language-model-free iterative pseudo-labeling. arXiv preprint arXiv:2010.11524. Cited by: §1, Table 1, Table 2.
-  (2020) Deep contextualized acoustic representations for semi-supervised speech recognition. In ICASSP, pp. 6429–6433. Cited by: §2.2.
-  (2020) Decoar 2.0: deep contextualized acoustic representations with vector quantization. arXiv preprint arXiv:2012.06659. Cited by: §1, §2.2.
-  (2020) BERTphone: phonetically-aware encoder representations for utterance-level speaker and language recognition. In Proc. Odyssey, pp. 9–16. Cited by: §2.2.
-  (2020) TERA: Self-supervised Learning of Transformer Encoder Representation for Speech. arXiv preprint arXiv:2007.06028. Cited by: §1, §2.2.
-  (2020) Mockingjay: unsupervised speech representation learning with deep bidirectional transformer encoders. In ICASSP, pp. 6419–6423. Cited by: §1, §2.2.
-  (2019) RWTH asr systems for librispeech: hybrid vs attention–w/o data augmentation. arXiv preprint arXiv:1905.03072. Cited by: Table 2.
-  (2018) Semi-supervised training of acoustic models using lattice-free mmi. In ICASSP, pp. 4844–4848. Cited by: §1.
-  (2015) Librispeech: an asr corpus based on public domain audio books. In ICASSP, pp. 5206–5210. Cited by: §4.1.
-  (2020) Improved noisy student training for automatic speech recognition. arXiv preprint arXiv:2005.09629. Cited by: §1, Table 1, Table 2, §4.3.2.
-  (2019) SpecAugment: a simple data augmentation method for automatic speech recognition. In Interspeech, pp. 2613–2617. Cited by: §4.1.
-  (2019) Lessons from building acoustic models with a million hours of speech. In ICASSP, pp. 6670–6674. Cited by: §1.
-  (2011) The Kaldi speech recognition toolkit. In ASRU, Cited by: §4.2.
-  (2019) Wav2vec: unsupervised pre-training for speech recognition. In Interspeech, pp. 3465–3469. Cited by: §2.2.
-  (2015) Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. Cited by: §4.1.
-  (2020) Learning from noisy labels with deep neural networks: a survey. arXiv preprint arXiv:2007.08199. Cited by: §2.1.
-  (2021) Joint masked cpc and ctc training for asr. In ICASSP, pp. 3045–3049. Cited by: §2.2.
-  (2013) Deep neural network features and semi-supervised training for low resource speech recognition. In ICASSP, pp. 6704–6708. Cited by: §1.
-  (2021) UniSpeech at scale: an empirical study of pre-training method on large-scale speech recognition dataset. arXiv preprint arXiv:2107.05233. Cited by: §2.2.
-  (2021) Unispeech: unified speech representation learning with labeled and unlabeled data. arXiv preprint arXiv:2101.07597. Cited by: §2.2.
-  (2020) Iterative pseudo-labeling for speech recognition. arXiv preprint arXiv:2005.09267. Cited by: §1, Table 1.