Data augmentation has been a successful method for improving generalization performance in Automatic Speech Recognition (ASR). Recently, SpecAugment [1], an augmentation scheme that directly augments the spectrogram of the input utterance, has shown surprising effectiveness in improving the performance of ASR networks on the 960h LibriSpeech and 300h Switchboard datasets. One natural question that arises is whether the effectiveness of SpecAugment persists for large scale tasks.
In this paper, we address this question by applying SpecAugment to the Google Multidomain Dataset introduced in [2]. The Google Multidomain Dataset is a large scale multi-domain dataset with multiple test sets from disparate domains. All data in the dataset is anonymized. We compare the performance of the trained network with respect to the various forms of augmentation applied to the data, the results of which are summarized in table 1. In [2], Multistyle TRaining (MTR) [3], where a room simulator is used to combine clean audio with a large library of noise audio, is employed to augment the input data. We take this as the baseline when studying the performance of SpecAugment.
| Augmentation | Result |
|---|---|
| Multistyle TRaining (MTR) | Baseline |
| SpecAugment + MTR | Worse |
| Mix SpecAugment & MTR | Better |
As summarized in table 1, we compare the performance of the network when trained on clean data, data with MTR applied, data with SpecAugment applied, data with both SpecAugment and MTR applied, and data obtained by mixing SpecAugmented and MTR data. We find that SpecAugment, when applied to clean data, performs better than the baseline on all natural test sets, while it performs worse only on a synthetic test set obtained by applying MTR to test utterances. To our surprise, applying SpecAugment on top of MTR degrades performance across most domains. Meanwhile, we are able to achieve improvement across all domains by mixing SpecAugmented data with MTR data.
SpecAugment requires a negligible amount of additional computational resources, does not require additional audio data, can be applied online and is thus highly scalable as the training set becomes large. Our results therefore suggest that SpecAugment can be considered as a serious alternative to more sophisticated resource-heavy augmentation methods.
SpecAugment policies consist of frequency masking, time masking and time warping. The augmentation policies considered in [1] have a fixed number of time masks regardless of the length of the utterance. On large scale tasks spanning multiple domains, we expect the length of the utterances to have a large variance. We thus introduce adaptive time masking, where the number of time masks and/or the size of the time masks vary depending on the length of the input. We experiment with several adaptive policies on the Google Multidomain Dataset and LibriSpeech 960h. So far, we have not found adaptive time masking policies that perform better than vanilla SpecAugment on the Google Multidomain Dataset. Meanwhile, we find adaptive policies that yield performance gains on LibriSpeech relative to [1], as we are able to train a Listen, Attend and Spell [5] network to have 2.2% WER on test-clean and 5.2% WER on test-other.
1.1 Related Work
There is a vast literature on augmentation in ASR, only a part of which we survey here. Artificial data augmentation for low resource speech recognition tasks has been studied in [6, 7]. Vocal Tract Length Perturbation has been introduced in the context of data augmentation for ASR in [8], and explored further in [9]. Noisy audio signals have been used for augmentation in [10]. Speed perturbation [11] has been an integral part of augmentation of speech data. The work [3] studies the effect of using acoustic room simulators. The works [12, 13] examine the application of data augmentation for keyword spotting. Dropout of features has been used for training multi-stream ASR systems in [14, 15]. Systematic omission of frequency channels of the input spectrogram has been studied in the context of CNN ASR networks in [16, 17]. We have commented on SpecAugment [1] in the introduction.
Data augmentation has also been successfully applied to large scale industrial datasets. As noted earlier, Multistyle TRaining (MTR) is a popular technique where clean audio is combined with background noise using a room simulator [3]. MTR has been successfully applied to HMM-based systems [18, 19] and end-to-end LAS models [5, 20, 21]. A natural question is how SpecAugment compares to, or can complement, existing data augmentation techniques like MTR, especially on large scale datasets.
Our contribution in this paper is three-fold:
We scale up SpecAugment to large scale industrial datasets. We compare to existing MTR data augmentation, and present how we can improve upon it.
We demonstrate that SpecAugment improves the performance of streaming models.
We present an adaptive version of SpecAugment, where the degree of time masking is adaptive to the input sequence length.
2 SpecAugment and Adaptive Masking
We briefly review SpecAugment in this section, and introduce its adaptive variants. A SpecAugment policy is obtained by composing three basic augmentations—time warping, frequency masking and time masking. We denote the time and frequency dimensions of the spectrogram as τ and ν, respectively.
Time warping with parameter W: A displacement w is chosen from a uniform distribution from −W to W. A start point t₀ is chosen from the time interval [W, τ − W). A piecewise-linear warping function ρ is defined so that the start point t₀ is mapped to the point t₀ + w and the boundary points 0 and τ are fixed:

ρ(t) = t · (t₀ + w) / t₀ for 0 ≤ t ≤ t₀,
ρ(t) = (t₀ + w) + (t − t₀) · (τ − t₀ − w) / (τ − t₀) for t₀ < t ≤ τ.

Warping is defined so that the warped features x′ (in our case, log-mel frequency coefficients) are related to the original features x by x′_ρ(t) = x_t.
We note that the original implementation of time warping presented in [1], for all practical purposes, is equivalent to this alternative definition.
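As a concrete illustration, the warp above can be implemented by resampling the time axis through the piecewise-linear map. The following is a minimal NumPy sketch under our own conventions (the function name, the integer sampling of w and t₀, and the clipping of the warp point are our choices, not the original implementation):

```python
import numpy as np

def time_warp(spec, W, rng):
    """Piecewise-linear time warp of a (time, frequency) spectrogram.

    A start point t0 in [W, tau - W) is displaced by w ~ U[-W, W],
    while the boundary frames 0 and tau - 1 stay fixed.
    """
    tau = spec.shape[0]
    if tau <= 2 * W:
        return spec.copy()  # utterance too short to warp
    w = int(rng.integers(-W, W + 1))
    t0 = int(rng.integers(W, tau - W))
    t_mid = int(np.clip(t0 + w, 1, tau - 2))  # keep warp point strictly inside
    # For each output frame t, find the source coordinate via the inverse map
    # of the piecewise-linear function sending t0 -> t0 + w.
    src = np.interp(np.arange(tau), [0.0, t_mid, tau - 1.0], [0.0, t0, tau - 1.0])
    out = np.empty_like(spec)
    for ch in range(spec.shape[1]):
        # linearly interpolate each frequency channel at the source positions
        out[:, ch] = np.interp(src, np.arange(tau), spec[:, ch])
    return out

rng = np.random.default_rng(0)
spec = rng.standard_normal((100, 128)).astype(float)  # (time, frequency)
warped = time_warp(spec, W=10, rng=rng)
```

Because the boundary points are fixed, the first and last frames of the warped spectrogram coincide with those of the original.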
Frequency masking with parameter F: A mask size f is chosen from a uniform distribution from 0 to F. The consecutive log-mel frequency channels [f₀, f₀ + f) are then masked, where f₀ is chosen from [0, ν − f).
Time masking with parameter T: A mask size t is chosen from a uniform distribution from 0 to T. The consecutive time steps [t₀, t₀ + t) are masked, where t₀ is chosen from [0, τ − t).
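The two masking transforms above can be sketched as follows (a minimal NumPy sketch; the function names and the choice of 0 as the masking value are ours, not taken from the original implementation):

```python
import numpy as np

def freq_mask(spec, F, rng):
    """Mask f consecutive frequency channels, with f ~ U[0, F]."""
    nu = spec.shape[1]                     # number of frequency channels
    f = int(rng.integers(0, F + 1))        # mask size
    f0 = int(rng.integers(0, max(1, nu - f)))  # mask start in [0, nu - f)
    out = spec.copy()
    out[:, f0:f0 + f] = 0.0
    return out

def time_mask(spec, T, rng):
    """Mask t consecutive time steps, with t ~ U[0, T]."""
    tau = spec.shape[0]                    # number of time steps
    t = int(rng.integers(0, T + 1))
    t0 = int(rng.integers(0, max(1, tau - t)))
    out = spec.copy()
    out[t0:t0 + t, :] = 0.0
    return out

rng = np.random.default_rng(0)
spec = rng.standard_normal((100, 128))     # (time, frequency) log-mel spectrogram
aug = time_mask(freq_mask(spec, F=27, rng=rng), T=50, rng=rng)
```

A full policy applies each mask a fixed number of times; composing the calls as above realizes one application of each.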
The SpecAugment policies in [1] consist of applying these three augmentations a fixed number of times.
In large scale datasets that contain disparate domains of inputs, we expect there to be a large variance in the length of the input audio. Thus, a fixed number of time masks may not be adequate for such tasks, as the time masking may be too weak for longer utterances, or too severe for shorter ones. We thus introduce two different ways time masking can be made adaptive with respect to the length τ of the spectrogram:
Adaptive multiplicity: The number, or multiplicity, of time masks is set to ⌊p_M · τ⌋ for the multiplicity ratio p_M.
Adaptive size: The time mask parameter T is set to ⌊p_S · τ⌋ for the size ratio p_S.
In this paper, we cap the number of time masks at 20 when using adaptive time masking, so that the multiplicity is given by min(20, ⌊p_M · τ⌋).
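The adaptive quantities above can be computed as in the following sketch (the helper name is ours, and the ratio values in the example are illustrative, not the ones used in our experiments):

```python
MAX_TIME_MASKS = 20  # cap on the number of time masks

def adaptive_time_mask_params(tau, p_mult, p_size):
    """Return (multiplicity, T) for an utterance of length tau time steps.

    multiplicity = min(20, floor(p_mult * tau))  -- adaptive multiplicity
    T            = floor(p_size * tau)           -- adaptive mask-size parameter
    """
    multiplicity = min(MAX_TIME_MASKS, int(p_mult * tau))
    T = int(p_size * tau)
    return multiplicity, T

# e.g. a 10-second utterance at a 10 ms frame shift -> tau = 1000 frames
m, T = adaptive_time_mask_params(1000, p_mult=0.04, p_size=0.04)
```

A long utterance thus hits the cap of 20 masks, while a short utterance receives proportionally fewer and smaller masks.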
3.1 LibriSpeech 960h
Our set-up for LibriSpeech 960h is based on that of [1]. We use the model LAS-6-1280 of that work and train with training schedule "L"(ong). We use shallow fusion [22] with an LSTM language model (LM) with two fusion parameters—the LM weight λ and the coverage penalty c. In this work, we use a 3-layer LSTM with width 4096, with a resulting word-level perplexity of 63.6 on the dev-set transcripts. We tune the fusion parameters on the dev-set using grid search and apply them to the test set to report the final results.
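As an illustration of how the two fusion parameters enter decoding, the following sketch shows the score used to rank beam-search hypotheses under shallow fusion (our formulation; the function and argument names are ours):

```python
def shallow_fusion_score(log_p_asr, log_p_lm, coverage, lm_weight, cov_penalty):
    """Hypothesis score under shallow fusion.

    log_p_asr   : log P(y | x) from the ASR model
    log_p_lm    : log P(y) from the external language model
    coverage    : attention-coverage term of the hypothesis
    lm_weight   : the LM weight (lambda in the text)
    cov_penalty : the coverage penalty (c in the text)
    """
    return log_p_asr + lm_weight * log_p_lm + cov_penalty * coverage
```

The grid search over (λ, c) on the dev set then amounts to re-ranking hypotheses with different weightings of these three terms.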
3.1.2 Adaptive SpecAugment Policies
We compare three augmentation policies. The baseline policy is the policy coined "LibriSpeech Double" in [1]. This policy has two frequency masks with F = 27 and two time masks with T = 100, which are applied after time warping with W = 80.
We introduce a hand-crafted adaptive policy, which we denote LibriFullAdapt. This policy has two frequency masks and time masks with both adaptive multiplicity and adaptive size, applied on top of time warping.
We list the results of our training in table 2. We find that the adaptive policy performs better than the fixed policy, and observe gain in performance both before and after shallow fusion with the language model.
| Method | test-clean | test-other |
|---|---|---|
| Lüscher et al. (2019) [24] | 2.3 | 5.0 |
| Kim et al. (2019) [9] | 2.4 | 8.3 |
| Karita et al. (2019) [25] | 2.6 | 5.7 |
| Han et al. (2019) [26] | 2.2 | 5.8 |
| LAS + Baseline SpecAugment (No LM) | 2.8 | 6.8 |
| LAS + Baseline SpecAugment (With LM) | 2.4 | 5.7 |
| LAS + LibriFullAdapt (No LM) | 2.6 | 6.0 |
| LAS + LibriFullAdapt (With LM) | 2.2 | 5.2 |
3.2 Google Multidomain Dataset
3.2.1 Data and Augmentation
We study the effect of SpecAugment when training on the Google Multidomain Dataset . We consider five test sets—Search, Search-Noisy, TTS-Audiobook, Telephony and YouTube—to measure the performance of the network. All training and testing data is anonymized.
| Method | Search | Search-Noisy | TTS-Audiobook | Telephony | YouTube |
|---|---|---|---|---|---|
| SpecAugBasic + MTR | 6.9 | 9.7 | 4.5 | 8.2 | 10.8 |
| Frequency Masking Only | 6.4 | 13.4 | 4.8 | 8.0 | 11.4 |
| SpecAugBasic on Clean | 6.2 | 12.9 | 4.2 | 7.2 | 10.3 |
| SpecAugBasic & MTR (20%) Mixed | 6.3 | 9.4 | 4.2 | 7.2 | 10.4 |
As a baseline for our experiments, we augment the input data by using a room simulator described in [3]. For training, various factors of the room simulator, including room size, reverberation time, microphone positions, speech and noise sources, and signal-to-noise ratio, are randomly selected and applied to all input utterances. The injected noise is sampled from either anonymized YouTube audio or a collection of real-life noises. The test set Search-Noisy is constructed by applying these perturbations to the Search test set.
The network input is a log-mel frequency spectrogram obtained from the audio using 32 msec frame windows with 10 msec shift. The log-mel frequency coefficients have 128 dimensions, and are stacked with height 512 with stride 3. The text is tokenized using a Word Piece Model (WPM) of vocabulary size 4k.
We consider five different input configurations: MTR data, clean data, MTR data with SpecAugment applied, clean data with SpecAugment applied and finally data obtained by mixing clean data with SpecAugment applied and MTR data with an 8:2 ratio. Augmentation is applied to the spectrogram after unstacking the features to obtain an array of 128 dimensional features. The augmented spectrogram is then restacked to the original form and fed into the acoustic model.
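The 8:2 mixed configuration can be sketched as a per-example routing choice at batch-construction time (a minimal sketch; the augmentation functions are stand-ins for the actual SpecAugment and MTR pipelines):

```python
import random

def augment_example(clean_audio, spec_augment, mtr, mtr_fraction=0.2, rng=random):
    """Route a training example: with probability mtr_fraction apply MTR,
    otherwise apply SpecAugment to the clean utterance (an 8:2 mix at 0.2)."""
    if rng.random() < mtr_fraction:
        return mtr(clean_audio)          # room-simulator augmentation
    return spec_augment(clean_audio)     # spectrogram masking on clean data

# illustrative stand-in augmentation functions
r = random.Random(0)
batch = [augment_example("utt", lambda a: ("specaug", a), lambda a: ("mtr", a),
                         mtr_fraction=0.2, rng=r) for _ in range(10)]
```

Routing per example means no extra storage is needed for a pre-mixed dataset; the mixture ratio is just a sampling probability.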
We present the result of training with a vanilla SpecAugment policy, which we denote SpecAugBasic. This policy has two frequency masks and two time masks; time warping is not used. As a control experiment, we also train the network on data augmented using only frequency masking with two masks.
3.2.2 SpecAugment on RNN Transducer (RNN-T)
We train an RNN-T model described in [28]. The encoder is an 8-layer uni-directional LSTM with cell size 2048, while the decoder is a 2-layer LSTM with the same cell size. No language model is used.
We note that this model produces weaker context information due to its streaming nature. We nevertheless get gains from time masking, as we demonstrate shortly.
As explained in [28], our RNN-T model relies heavily on layer normalization [29]. Note that the application of time masks makes the variance of the hidden activations vanish, which destabilizes training in the presence of layer normalization. Even when an aggressive variance floor is used, this still leads to large gradients as the network becomes deeper. To alleviate this instability, we add Gaussian noise to the time-masked regions, which stabilizes training.
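The stabilization described above can be sketched as follows: rather than setting masked frames to a constant, which can make per-frame variance degenerate under layer normalization, the masked region is filled with Gaussian noise. This is our own minimal sketch; the noise scale is an illustrative choice, not the value used in our experiments.

```python
import numpy as np

def time_mask_with_noise(spec, t0, t, rng, noise_std=1.0):
    """Fill time steps [t0, t0 + t) with Gaussian noise instead of zeros,
    so layer normalization still sees non-degenerate per-frame statistics."""
    out = spec.copy()
    out[t0:t0 + t, :] = noise_std * rng.standard_normal((t, spec.shape[1]))
    return out
```

The masked frames still carry no information about the utterance, so the regularization effect of time masking is preserved.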
The results of training the acoustic model using the different augmentation methods are presented in table 3. Note that when SpecAugment is applied on top of MTR, the performance degrades below the baseline across all test sets.
Meanwhile, we find that when SpecAugBasic is applied to the clean utterances, it outperforms the baseline across all "natural test sets," while it performs worse on the synthetic test set obtained by applying MTR to Search-domain utterances. This degradation, however, can be addressed by mixing SpecAugmented data with MTR data, as shown in the last row of the table.
We note that while we have experimented with adaptive time masking policies, we have not discovered one that outperforms the fixed policy SpecAugBasic. The benefit of adaptive time masking on this dataset has yet to be seen.
We emphasize that the trained model is a streaming model, whose performance SpecAugment is still able to noticeably improve. Furthermore, we see that time masking plays an important role in improving the performance of this network, which is evident from the evaluation results on the YouTube dataset.
4 Summary and Discussion
We find that SpecAugment, despite its simplicity, yields better gains on large scale datasets compared to time-tested and more sophisticated augmentation methods. Given the computational advantage that SpecAugment has, we find it has rich potential for being incorporated into the data pipeline of industrial-scale tasks.
We have introduced adaptive time masking for SpecAugment. While we have not been able to find an adaptive policy that outperforms a non-adaptive policy on the Google Multidomain Dataset, we have demonstrated the effectiveness of adaptive masking on LibriSpeech 960h. We expect further exploration of adaptive masking to bring improvements when SpecAugment is applied to large scale tasks.
We thank Yuan Cao, Ekin Dogus Cubuk, Yanping Huang, Luke Metz, Arun Narayanan, Ruoming Pang, Tara Sainath, Qizhe Xie and Barret Zoph for useful discussions and helping with our experiments.
- [1] Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le, "SpecAugment: A simple data augmentation method for automatic speech recognition," in Interspeech, 2019.
- [2] Arun Narayanan, Ananya Misra, Khe Chai Sim, Golan Pundak, Anshuman Tripathi, Mohamed Elfeky, Parisa Haghani, Trevor Strohman, and Michiel Bacchiani, "Toward domain-invariant speech recognition via large scale training," in arXiv, 2018.
- [3] Chanwoo Kim, Ananya Misra, Kean Chin, Thad Hughes, Arun Narayanan, Tara Sainath, and Michiel Bacchiani, "Generation of large-scale simulated utterances in virtual rooms to train deep-neural networks for far-field speech recognition in Google Home," in Interspeech, 2017.
- [4] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur, "LibriSpeech: An ASR corpus based on public domain audio books," in ICASSP, 2015.
- [5] William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in ICASSP, 2016.
- [6] Naoyuki Kanda, Ryu Takeda, and Yasunari Obuchi, "Elastic spectral distortion for low resource speech recognition with deep neural networks," in ASRU, 2013.
- [7] Anton Ragni, Kate M. Knill, Shakti P. Rath, and Mark J. F. Gales, "Data augmentation for low resource languages," in Interspeech, 2014.
- [8] Navdeep Jaitly and Geoffrey Hinton, "Vocal Tract Length Perturbation (VTLP) improves speech recognition," in ICML Workshop on Deep Learning for Audio, Speech and Language Processing, 2013.
- [9] Chanwoo Kim, Minkyu Shin, Abhinav Garg, and Dhananjaya Gowda, "Improved Vocal Tract Length Perturbation for a State-of-the-Art End-to-End Speech Recognition System," in Interspeech, 2019.
- [10] Awni Hannun, Carl Case, Jared Casper, Bryan Catanzaro, Greg Diamos, Erich Elsen, Ryan Prenger, Sanjeev Satheesh, Shubho Sengupta, Adam Coates, and Andrew Ng, "Deep Speech: Scaling up end-to-end speech recognition," in arXiv, 2014.
- [11] Tom Ko, Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur, "Audio Augmentation for Speech Recognition," in Interspeech, 2015.
- [12] R. Prabhavalkar, R. Alvarez, C. Parada, P. Nakkiran, and T. N. Sainath, "Automatic gain control and multi-style training for robust small-footprint keyword spotting with deep neural networks," in ICASSP, 2015.
- [13] Anirudh Raju, Sankaran Panchapagesan, Xing Liu, Arindam Mandal, and Nikko Strom, "Data Augmentation for Robust Keyword Spotting under Playback Interference," in arXiv, 2018.
- [14] Sri Harish Mallidi and Hynek Hermansky, "Novel neural network based fusion for multistream ASR," in ICASSP, 2016.
- [15] György Kovács, László Tóth, Dirk Van Compernolle, and Marcus Liwicki, "Examining the Combination of Multi-Band Processing and Channel Dropout for Robust Speech Recognition," in Interspeech, 2019.
- [16] György Kovács, László Tóth, Dirk Van Compernolle, and Sriram Ganapathy, "Increasing the robustness of CNN acoustic models using autoregressive moving average spectrogram features and channel dropout," Pattern Recognition Letters, vol. 100, pp. 44–50, 2017.
- [17] László Tóth, György Kovács, and Dirk Van Compernolle, "A perceptually inspired data augmentation method for noise robust CNN acoustic models," in SPECOM, 2018.
- [18] Tara Sainath, Ron Weiss, Kevin Wilson, Andrew Senior, and Oriol Vinyals, "Learning the Speech Front-end With Raw Waveform CLDNNs," in Interspeech, 2015.
- [19] Bo Li, Tara Sainath, Arun Narayanan, Joe Caroselli, Michiel Bacchiani, Ananya Misra, Izhak Shafran, Hasim Sak, Golan Pundak, Kean Chin, Khe Chai Sim, Ron Weiss, Kevin Wilson, Ehsan Variani, Chanwoo Kim, Olivier Siohan, Mitchel Weintraub, Erik McDermott, Rick Rose, and Matt Shannon, "Acoustic Modeling for Google Home," in Interspeech, 2017.
- [20] Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, Navdeep Jaitly, Bo Li, Jan Chorowski, and Michiel Bacchiani, "State-of-the-art Speech Recognition With Sequence-to-Sequence Models," in ICASSP, 2018.
- [21] Chung-Cheng Chiu, Anshuman Tripathi, Katherine Chou, Chris Co, Navdeep Jaitly, Diana Jaunzeikare, Anjuli Kannan, Patrick Nguyen, Hasim Sak, Ananth Sankar, Justin Tansuwan, Nathan Wan, Yonghui Wu, and Xuedong Zhang, "Speech recognition for medical conversations," in Interspeech, 2018.
- [22] Çaglar Gülçehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loïc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio, "On using monolingual corpora in neural machine translation," in arXiv, 2015.
- [23] Jan Chorowski and Navdeep Jaitly, "Towards better decoding and language model integration in sequence to sequence models," in Interspeech, 2017.
- [24] Christoph Lüscher, Eugen Beck, Kazuki Irie, Markus Kitza, Wilfried Michel, Albert Zeyer, Ralf Schlüter, and Hermann Ney, "RWTH ASR systems for LibriSpeech: Hybrid vs attention - w/o data augmentation," in Interspeech, 2019.
- [25] Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, Shinji Watanabe, Takenori Yoshimura, and Wangyou Zhang, "A comparative study on transformer vs RNN in speech applications," in arXiv, 2019.
- [26] Kyu J. Han, Ramon Prieto, Kaixing Wu, and Tao Ma, "State-of-the-art speech recognition using multi-stream self-attention with dilated 1D convolutions," in arXiv, 2019.
- [27] Mike Schuster and Kaisuke Nakajima, "Japanese and Korean voice search," in ICASSP, 2012.
- [28] Yanzhang He, Tara N. Sainath, Rohit Prabhavalkar, Ian McGraw, Raziel Alvarez, Ding Zhao, David Rybach, Anjuli Kannan, Yonghui Wu, Ruoming Pang, Qiao Liang, Deepti Bhatia, Yuan Shangguan, Bo Li, Golan Pundak, Khe Chai Sim, Tom Bagby, Shuo-yiin Chang, Kanishka Rao, and Alexander Gruenstein, "Streaming end-to-end speech recognition for mobile devices," in ICASSP, 2019.
- [29] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton, "Layer normalization," in arXiv, 2016.