Attention based on-device streaming speech recognition with large speech corpus

01/02/2020 ∙ by Kwangyoun Kim, et al.

In this paper, we present a new on-device automatic speech recognition (ASR) system based on monotonic chunk-wise attention (MoChA) models trained with a large (> 10K hours) corpus. We attained around 90% word recognition accuracy for the general domain, mainly by using joint training with connectionist temporal classification (CTC) and cross-entropy (CE) losses, minimum word error rate (MWER) training, layer-wise pre-training, and data augmentation methods. In addition, we compressed our models to more than 3.4 times smaller using an iterative hyper low-rank approximation (LRA) method while minimizing the degradation in recognition accuracy. The memory footprint was further reduced with 8-bit quantization to bring the final model size below 39 MB. For on-demand adaptation, we fused the MoChA models with statistical n-gram models, achieving a relative 36% improvement in word error rate (WER) for target domains including the general domain.


1 Introduction

Recently, end-to-end (E2E) neural network architectures based on sequence-to-sequence (seq2seq) learning for automatic speech recognition (ASR) have been attracting significant attention [1, 2], mainly because they can learn the acoustic and linguistic information, as well as the alignments between them, simultaneously, unlike conventional ASR systems based on hybrids of hidden Markov models (HMMs) and deep neural networks (DNNs). Moreover, E2E models are more amenable to compression since they do not need separate phonetic dictionaries and language models, making them one of the best candidates for on-device ASR systems.

Among the various E2E ASR model architectures, such as attention-based encoder-decoder models [3] and recurrent neural network transducer (RNN-T) based models [4, 5], we chose the attention-based method since its accuracy has surpassed that of conventional state-of-the-art HMM-DNN ASR systems [6]. Despite their high accuracy, attention models that require a full alignment between the input and output sequences cannot provide streaming ASR services. Several studies have addressed this lack of streaming capability [7, 8, 9]. In [7], an online neural transducer was proposed, which applies full attention over chunks of input and is trained with an additional end-of-chunk symbol. In [8], a hard monotonic attention based model was proposed for streaming decoding with acceptable accuracy degradation. Furthermore, in [9], the monotonic chunk-wise attention (MoChA) method was proposed, which achieved promising accuracy by loosening the hard monotonic alignment constraint and applying soft attention over a few speech chunks.

In this paper, we explain how we improved our MoChA-based ASR system into a commercialization-ready on-device solution. First, we trained the MoChA models using connectionist temporal classification (CTC) and cross-entropy (CE) losses jointly to learn alignment information precisely. A minimum word error rate (MWER) method, a type of sequence-discriminative training, was adopted to optimize the models [10]. Also, for better stability and convergence of model training, we applied a layer-wise pre-training mechanism [11]. Furthermore, in order to compress the models, we present a hyper low-rank matrix approximation (hyper-LRA) method employing DeepTwist [12] with minimal accuracy degradation. Another important requirement for commercializing ASR solutions is boosting the recognition accuracy for user context-specific keywords. In order to bias the ASR system at inference time, we fused the MoChA models with statistical n-gram based personalized language models (LMs).

The main contribution of this paper is, to the best of our knowledge, the first attention-based streaming ASR system capable of running on devices. We succeeded not only in training MoChA models with large corpora for Korean and English, but also in satisfying the needs of commercial on-device applications.

The rest of this paper is organized as follows: Section 2 explains the attention-based speech recognition models. Section 3 describes how the optimization methods improved recognition accuracy, and Section 4 presents the compression algorithm for MoChA models. Section 5 describes the n-gram LM fusion for on-demand adaptation, and Sections 6 and 7 discuss the experiments, results, and conclusions.

2 Model Architecture

Attention-based encoder-decoder models are composed of an encoder, a decoder, and an attention block between the two [13]. The encoder converts an input sequence into a sequence of hidden vector representations referred to as encoder embeddings. The decoder is a generative model which predicts a sequence of target labels. The attention is used to learn the alignment between the two sequences of encoder embeddings and target labels.

2.1 Attention-based speech recognition

The attention-based models can be applied to ASR systems [14, 15] using the following equations (1)-(4).

h = Encoder(x)    (1)

where x = (x_1, ..., x_T) is the speech feature vector sequence, and h = (h_1, ..., h_{T'}) is the sequence of encoder embeddings. The Encoder can be constructed of bi- or uni-directional long short-term memory (LSTM) layers [16]. Due to the difference in length between the input and output sequences, the model often has difficulty converging. In order to compensate for this, pooling along the time axis is applied to the outputs of intermediate encoder layers, effectively reducing the length of h from T to T'.

e_{u,t} = v^T tanh(W_s s_u + W_h h_t + b),   α_u = softmax(e_u)    (2)

An attention weight α_{u,t}, often referred to as the alignment, represents how the encoder embedding h_t of each frame and the decoder state s_u are correlated [13]. We employed an additive attention method to compute the correlations. A softmax function converts the attention energies e_u into a probability distribution, which is used as the attention weights. A weighted sum of the encoder embeddings is computed using the attention weights as,

c_u = Σ_t α_{u,t} h_t    (3)

where c_u denotes the context vector; since the encoder embeddings of the entire input frames are used to compute the context, we refer to this attention method as full attention. The Decoder, which consists of uni-directional LSTM layers, computes the current decoder state s_u from the previously predicted label y_{u-1} and the previous context c_{u-1}. The output label is then calculated by a Predict block using the current decoder state, the context, and the previous output label.

y_u = Predict(s_u, c_u, y_{u-1})    (4)

Typically, the prediction block consists of one or two fully connected layers and a softmax layer that generates a probability score for the target labels. We applied a max-pooling layer between the two fully connected layers. The probability of the predicted output sequence y for a given x is calculated as in equation (5):

P(y|x) = Π_u P(y_u | y_{<u}, x)    (5)

where P(y_u | y_{<u}, x) is the probability of each output label. Even though the attention-based models have shown state-of-the-art performance, they are not a suitable choice for streaming ASR, particularly because they must compute the alignment between the current decoder state and the encoder embeddings of the entire input sequence.
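As a concrete illustration of equations (2)-(4), the following NumPy sketch computes additive attention energies, softmax weights, and the context vector for one decoder step. The dimensions and randomly initialized weights are toy assumptions for illustration, not the trained model:

```python
import numpy as np

def additive_attention(s, h, W_s, W_h, v, b):
    """One decoder step of additive (Bahdanau-style) full attention.

    s: decoder state (d,); h: encoder embeddings (T, d).
    Returns attention weights over all T frames and the context vector.
    """
    e = np.tanh(s @ W_s + h @ W_h + b) @ v   # attention energies, shape (T,)
    a = np.exp(e - e.max())
    a /= a.sum()                             # softmax -> attention weights
    c = a @ h                                # context: weighted sum of embeddings
    return a, c

rng = np.random.default_rng(0)
T, d = 5, 4                                  # toy sizes
h = rng.normal(size=(T, d))                  # stand-in encoder embeddings
s = rng.normal(size=d)                       # stand-in decoder state
W_s, W_h = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v, b = rng.normal(size=d), rng.normal(size=d)
alpha, context = additive_attention(s, h, W_s, W_h, v, b)
```

Note that the weights sum to one over all T frames, which is exactly why full attention cannot stream: every input frame must be available before the first label is emitted.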

2.2 Monotonic Chunk-wise Attention

A monotonic chunk-wise attention (MoChA) model was introduced to resolve the streaming incapability of attention-based models, under the assumption that the alignment between the speech input and the output text sequence is monotonic [8, 9].

The MoChA model computes the context using two kinds of attention: a hard monotonic attention and a soft chunkwise attention. The hard monotonic attention is computed as,

p_{u,t} = σ(e^{mono}_{u,t}),   z_{u,t} ~ Bernoulli(p_{u,t})    (6)

where z_{u,t} is the hard monotonic attention used to determine whether to attend to the encoder embedding h_t. The decoder attends to encoder embedding h_{t_u} to predict the next label only if z_{u,t} = 1. Equation (6) is computed for t ≥ t_{u-1}, where t_{u-1} denotes the attended encoder embedding index for the previous output label prediction. The soft chunkwise attention is computed as

β_{u,t} = exp(e^{chunk}_{u,t}) / Σ_{k=t_u-w+1}^{t_u} exp(e^{chunk}_{u,k}),   c_u = Σ_{t=t_u-w+1}^{t_u} β_{u,t} h_t    (7)

where t_u is the attending point chosen by the monotonic attention, w is the pre-determined chunk size, β_{u,t} is the chunkwise soft attention weight, and c_u is the chunkwise context used to predict the output label.

We used a modified additive attention for computing the attention energy in order to ensure model stability [9]:

e_{u,t} = g (v^T / ||v||) tanh(W_s s_u + W_h h_t + b) + r    (8)

where g and r are additional trainable scalars, and the other variables are the same as in equation (2). The energies e^{mono}_{u,t} and e^{chunk}_{u,t} are each computed using equation (8) with their own trainable variables.
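Once the monotonic attention has fixed the attend point t_u, the chunkwise weights of equation (7) reduce to a softmax over the last w frames. The following is a minimal NumPy sketch of that step with toy values (it omits the full MoChA training-time recursion):

```python
import numpy as np

def chunkwise_context(h, chunk_energies, t_u, w=2):
    """Soft chunkwise attention over the w frames ending at attend point t_u."""
    lo = max(0, t_u - w + 1)               # clip the chunk at the utterance start
    e = chunk_energies[lo:t_u + 1]
    beta = np.exp(e - e.max())
    beta /= beta.sum()                     # chunkwise softmax weights, eq. (7)
    return beta @ h[lo:t_u + 1]            # chunkwise context c_u

h = np.arange(12.0).reshape(6, 2)          # 6 encoder frames, dim 2 (toy values)
energies = np.array([0.1, 0.3, 2.0, 0.5, 0.2, 0.1])
c_u = chunkwise_context(h, energies, t_u=2, w=2)
```

Because only frames up to t_u are touched, the context can be emitted as soon as the monotonic attention fires, which is what makes MoChA streamable.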

Figure 1: Model architecture

3 Training and Optimization

The main objective of the attention-based encoder-decoder model is to find parameters that minimize the cross entropy (CE) between the predicted sequences and the ground-truth sequences:

L_CE = -Σ_u log P(y*_u | y*_{<u}, x)    (9)

where y* is the ground-truth label sequence. We trained MoChA models with the CTC loss and CE loss jointly to learn the alignment better, and MWER-based sequence-discriminative training was employed to further improve the accuracy. Moreover, to ensure stability when training MoChA models, we adopted a pre-training scheme.

3.1 Joint CTC-CE training

In spite of the different lengths of the speech feature sequences and the corresponding text sequences, the CTC loss induces the model to maximize the total probability of all possible alignments between the input and output sequences [17]. The CTC loss is defined as follows,

L_CTC = -log Σ_{π ∈ B^{-1}(y)} Π_t P(π_t | x)    (10)

where B^{-1}(y) is the set of all possible alignments π generated with a {blank} symbol and repetitions of output units so as to have the same length as the input speech frames, and P(π_t | x) is the probability that the t-th predicted label is π_t.

The CTC loss is readily applicable for training the MoChA model, especially the encoder, because it also leads the alignment between input and output to be monotonic. Moreover, the CTC loss has the advantage of learning alignments in noisy environments and, through joint training, can help the attention-based model learn alignments quickly [18].

The joint training loss is defined as follows,

L_joint = λ L_CE + (1 - λ) L_CTC    (11)

where L_joint is the weighted interpolation of the two losses with weight λ.
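The interpolation in equation (11) is a one-liner in code; the sketch below uses λ = 0.8 as the default, since Section 6 reports a joint weight of 0.8 (gradually increased during training):

```python
def joint_loss(l_ce, l_ctc, lam=0.8):
    """Joint CTC-CE objective, eq. (11). Which term lam weights is an
    assumption here; the paper reports a joint weight of 0.8."""
    return lam * l_ce + (1.0 - lam) * l_ctc

loss = joint_loss(1.0, 2.0)   # toy loss values
```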

3.2 MWER training

In this paper, byte-pair encoding (BPE) based sub-word units were used as the output units of the decoder [19]. The model is thus optimized to generate individual BPE output sequences well. However, the eventual goal of speech recognition is to reduce the word error rate (WER). Also, since the decoder is used with a beam search during inference, accuracy can be improved by directly defining a loss that lowers the expected WER over the candidate beam-search results. The MWER loss is represented as follows,

L_MWER = Σ_{y_i ∈ B} P̂(y_i | x) (W(y_i, y*) - Ŵ)    (12)

where B is the set of candidate beam-search results, P̂(y_i | x) is the probability of hypothesis y_i renormalized over the beam, and W(y_i, y*) is the number of word errors of each beam result sequence y_i. Subtracting Ŵ, the average number of word errors over all the beams, helps the model converge well by reducing the variance.

L = L_MWER + λ_CE L_CE    (13)

The MWER loss, L_MWER, can also be easily integrated with the CE loss by linear interpolation, as shown in equation (13).
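A NumPy sketch of the expectation in equation (12), with hypothesis probabilities renormalized over a toy 4-best list (the beam scores and error counts are illustrative numbers only):

```python
import numpy as np

def mwer_loss(beam_logps, word_errors):
    """Expected word-error excess over an N-best list, eq. (12).

    beam_logps: log-probabilities of the beam hypotheses;
    word_errors: word-error counts W(y_i, y*) of each hypothesis.
    Subtracting the beam-average error reduces gradient variance.
    """
    p = np.exp(beam_logps - beam_logps.max())
    p /= p.sum()                                   # renormalize over the beam
    return float(p @ (word_errors - word_errors.mean()))

logps = np.array([-1.0, -2.0, -3.0, -4.0])         # toy beam scores
errors = np.array([0.0, 1.0, 2.0, 3.0])            # toy word-error counts
loss = mwer_loss(logps, errors)
```

A negative value here simply means the most probable hypothesis already has fewer errors than the beam average; the gradient still pushes probability mass toward low-error hypotheses.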

3.3 Layer-wise pre-training

A layer-wise pre-training of the encoder was proposed in [11] to ensure that the model converges well and performs better. The initial encoder consists of 2 LSTM layers with a max-pooling layer with a pooling factor of 32 in between. After some sub-epochs of training, a new LSTM layer and a max-pooling layer are added, and the total pooling factor of 32 is redistributed as 2 for the lower layer and 16 for the newly added higher layer. This strategy is repeated until the entire network is built, with 6 encoder LSTM layers and 5 max-pooling layers, each with a pooling factor of 2. Finally, the total pooling factor is changed to 8, with only the lower 3 max-pooling layers keeping a pooling factor of 2.

During pre-training of our MoChA models, when a new LSTM and a max-pool layer were piled up at each stage, the training and validation errors shot up. In order to address this, we employed a learning rate warm-up for every new pre-training stage.

3.4 Spec augmentation

Because an end-to-end ASR model learns from a transcribed corpus, a large dataset is one of the most important factors for achieving better accuracy. Data augmentation methods have been introduced to generate additional data from the originals; recently, spec augmentation showed state-of-the-art results on public datasets [20]. Spec augmentation masks parts of the spectrogram along the time and frequency axes, so the model learns to recognize masked speech despite the missing information.
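The masking step can be sketched in a few lines of NumPy. The maximum mask widths below follow the values reported in Section 6 (13 frequency bins, 50 frames); everything else is an illustrative simplification of [20]:

```python
import numpy as np

def spec_augment(spec, max_f=13, max_t=50, rng=None):
    """Zero out one random frequency band and one random time band (sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    spec = spec.copy()                         # leave the original untouched
    T, F = spec.shape
    f = rng.integers(0, max_f + 1)             # frequency-mask width
    f0 = rng.integers(0, F - f + 1)
    t = rng.integers(0, max_t + 1)             # time-mask width
    t0 = rng.integers(0, T - t + 1)
    spec[:, f0:f0 + f] = 0.0                   # frequency mask
    spec[t0:t0 + t, :] = 0.0                   # time mask
    return spec

spec = np.ones((100, 40))                      # 100 frames x 40 mel bins (toy)
masked = spec_augment(spec, rng=np.random.default_rng(0))
```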

4 Low-rank matrix approximation

We adopted a low-rank matrix approximation (LRA) algorithm based on singular value decomposition (SVD) to compress our MoChA model [21]. Given a weight matrix W ∈ R^{m×n}, its SVD is W = U Σ V^T, where Σ is a diagonal matrix of singular values, and U and V are unitary matrices. If we specify a rank k, the truncated SVD is W_k = U_k Σ_k V_k^T, where U_k, Σ_k and V_k are the top-left submatrices of U, Σ and V, respectively. For an LRA, W is replaced by the product of U_k Σ_k and V_k^T, and the number of weight parameters is reduced from mn to k(m+n). Hence we obtain a compression ratio of mn / k(m+n) in memory footprint and in computational complexity for matrix-vector multiplication. From the LRA, we have an LRA distortion

R = W - U_k Σ_k V_k^T    (14)

For each layer, given an input x, the output error is given by

f(Wx + b) - f((W - R)x + b)    (15)

where b and f represent a bias vector and a non-linear activation function, respectively. This error propagates through the layers and increases the training loss. In a conventional LRA, U_k, Σ_k and V_k are updated in the backward pass by constraining the weight space to k(m+n) dimensions. However, for a large compression ratio, it is difficult to find the optimal weights due to the reduced dimension of the weight space. Instead, we find an optimal LRA by employing the DeepTwist method [12], which we call hyper-LRA.
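The plain truncated-SVD step above can be sketched in a few lines of NumPy; the matrix here is a toy example, whereas the real system chooses the rank per layer:

```python
import numpy as np

def low_rank_factors(W, k):
    """Rank-k truncated SVD of W (m x n): returns A = U_k Sigma_k (m x k)
    and B = V_k^T (k x n), cutting parameters from m*n to k*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 8))   # exactly rank-3 matrix
A, B = low_rank_factors(W, 3)
recon_err = np.abs(W - A @ B).max()                     # ~0 at the true rank
compression = W.size / (A.size + B.size)                # mn / k(m+n)
```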

Procedure TrainModelWeights(W_1, ..., W_L)
       Result: trained weight matrices W_l
       for each iteration i do
             for each layer l do
                    y_l = f(W_l x_l + b_l);
                     // x_l: layer input
                    if i mod S_D = 0 then
                           y_l = f((W_l - R_l) x_l + b_l);
                            // output error in (15)
                           W_l = W_l - R_l;
                            // LRA distortion R_l in (14)
                    end if
             end for
             compute the loss L;
             for each layer l do
                    W_l = W_l - η ∂L/∂W_l;
                     // η: learning rate
             end for
       end for
       for each layer l do
              W_l = U_k Σ_k V_k^T;
       end for

Procedure InferenceModelWeights()
       Result: factored weight matrices
       for each layer l do
              A_l = U_k Σ_k;  B_l = V_k^T;
       end for

Algorithm 1: The hyper-LRA algorithm

The hyper-LRA algorithm modifies the retraining process by adding the LRA distortion R to the weights, and the corresponding errors to the outputs of the layers, every S_D iterations, where S_D is the distortion period. After retraining, the product of U_k Σ_k and V_k^T is used in the inference model instead of W.

Note that the hyper-LRA algorithm optimizes W rather than U_k, Σ_k and V_k. In other words, the hyper-LRA method performs weight optimization in the hyperspace of the truncated weight space, which has the same dimension as the original weight space. Therefore the hyper-LRA approach can provide a much higher compression ratio than the conventional LRA, although it requires more computation for the retraining process.
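A toy NumPy sketch of this DeepTwist-style retraining loop, with a least-squares problem standing in for the real network (the rank, learning rate, and distortion period are illustrative assumptions):

```python
import numpy as np

def project_low_rank(W, k):
    """Snap W onto the rank-k manifold: the LRA distortion step, eq. (14)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
W_true = project_low_rank(rng.normal(size=(16, 16)), 2)  # rank-2 target weights
Y = X @ W_true
W = 0.1 * rng.normal(size=(16, 16))                      # full-rank initialization
for step in range(400):
    W -= 0.1 * X.T @ (X @ W - Y) / len(X)                # ordinary full-space update
    if step % 50 == 0:                                   # distortion period (assumed)
        W = project_low_rank(W, 2)                       # occasional weight distortion
W = project_low_rank(W, 2)                               # final projection for inference
fit_err = np.abs(W - W_true).max()
```

The key point mirrors the text above: the gradient update always runs in the full 16x16 weight space, and the rank constraint is only enforced occasionally, so the optimizer is not trapped in the reduced-dimensional space.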

5 On-Demand Adaptation

The on-demand adaptation is an important requirement not only for personal devices such as mobile phones but also for home appliances such as televisions. We adopted a shallow fusion method [22], incorporating n-gram LMs at inference time. By interpolating both a general LM and domain-specific LMs, we were able to boost the accuracy for the target domains while minimizing degradation in the general domain. The probabilities computed from the LMs and the E2E models are interpolated at each beam step as follows,

y* = argmax_y [ log P(y|x) + Σ_{i=1}^{K} λ_i log P_{LM_i}(y) ]    (16)

where K is the number of n-gram LMs and P_{LM_i} is the posterior distribution computed by the i-th n-gram LM. The LM distribution was calculated by looking up a probability for each BPE unit given its context.
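At each beam step, equation (16) just adds λ-weighted LM log-probabilities to the E2E scores. A minimal sketch with made-up distributions over three BPE units (the weights λ and probabilities here are illustrative assumptions, not the deployed values):

```python
import numpy as np

def fused_scores(logp_e2e, lm_logps, lambdas):
    """Shallow fusion at one beam step, eq. (16): interpolate the E2E
    log-posterior with K n-gram LM log-probabilities."""
    out = logp_e2e.copy()
    for lam, logp_lm in zip(lambdas, lm_logps):
        out = out + lam * logp_lm
    return out

logp_e2e = np.log(np.array([0.7, 0.2, 0.1]))      # E2E scores over 3 BPE units
lm_general = np.log(np.array([0.5, 0.3, 0.2]))    # general-domain n-gram LM
lm_domain = np.log(np.array([0.1, 0.1, 0.8]))     # domain LM biasing unit 2
scores = fused_scores(logp_e2e, [lm_general, lm_domain], [0.3, 0.5])
```

Because the fusion happens in the log domain at each step, it adds only LM look-ups to the beam search and requires no retraining of the E2E model.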

6 Experiment

6.1 Experimental setup

We evaluated first on the Librispeech corpus, which consists of 960 hours of data, and also on internal usage data. The usage corpus consists of around 10K hours of transcribed speech for each of Korean and English, recorded on mobile phones and televisions. We used one randomly sampled hour of usage data as the validation set for each language. We doubled the speech corpus by adding random noise, both for training and for validation. The decoding speed was evaluated on a Samsung Galaxy S10+ equipped with a Samsung Exynos 9820 chipset, a Mali-G76 MP12 GPU, and 12 GB of DRAM.

The speech data were sampled at 16 kHz with 16 bits per sample. The speech was encoded into 40-dimensional power mel-frequency filterbank features, computed by applying a power function to the mel spectrogram [23]. The frames were computed every 10 ms with a 25 ms Hanning window. We split words into 10K word pieces through the byte-pair encoding (BPE) method for both the Korean and English normalized text corpora. For Korean in particular, we reduced the total number of units for Korean characters from 11,172 to 68 by decomposing each Korean character into two or three graphemes, depending on the presence of a final consonant.
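The grapheme decomposition exploits the fact that precomposed Hangul syllables are laid out arithmetically in Unicode (each code point encodes a lead consonant, a vowel, and an optional final consonant). A sketch of one plausible way to do it:

```python
def decompose_hangul(ch):
    """Split a precomposed Hangul syllable (U+AC00-U+D7A3) into jamo indices:
    lead consonant, vowel, and the optional final consonant."""
    code = ord(ch) - 0xAC00
    if not 0 <= code <= 11171:
        raise ValueError("not a precomposed Hangul syllable")
    lead, rest = divmod(code, 21 * 28)       # 21 vowels x 28 tails per lead
    vowel, tail = divmod(rest, 28)
    return (lead, vowel, tail) if tail else (lead, vowel)  # 2 or 3 graphemes

two = decompose_hangul(chr(0xAC00))    # a syllable with no final consonant
three = decompose_hangul(chr(0xD55C))  # a syllable with a final consonant
```

Since there are only 19 lead consonants, 21 vowels, and 27 final consonants, the output vocabulary shrinks to a few dozen grapheme units plus specials, matching the 11,172-to-68 reduction described above.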

We constructed our ASR system based on ReturNN [24]. In order to speed up training, we used multi-GPU training based on the Horovod all-reduce method [25, 26], and for better convergence of the model, a ramp-up strategy for both the learning rate and the number of workers was used [6]. We used uniform label smoothing on the output label distribution of the decoder for regularization, and scheduled sampling was applied at a later stage of training to reduce the mismatch between training and decoding. The initial total pooling factor in the encoder was 32 for Librispeech but 16 for the internal usage data due to training sensitivity; both were reduced to 8 after the pre-training stage. The n-gram LMs were stored in a const arpa structure [27].

6.2 Performance

Encoder    Attention  Cell size  Librispeech WER           Usage KOR  Usage ENG
                                 test-clean  test-other    WER        WER
Bi-LSTM    Full       1024       4.38%       14.34%        8.58%      8.25%
Uni-LSTM   Full       1536       6.27%       18.42%        -          -
Uni-LSTM   MoChA      1024       6.88%       19.11%        11.34%     10.77%
Uni-LSTM   MoChA      1536       6.30%       18.41%        9.33%      8.82%

Table 1: Performance of attention-based models depending on the direction and the cell size of the encoder LSTM layers. Joint CTC and label smoothing are applied in all results, and data augmentation is used only on the usage data. The beam size for beam-search decoding is 12.
Librispeech                      Test-clean  Test-other
MoChA (baseline)                 6.70%       18.86%
+ Joint CTC & label smoothing    6.30%       18.41%
+ Spec augmentation              5.93%       15.98%
+ Joint MWER                     5.60%       15.52%

Table 2: Accuracy improvement from the optimizations.

We performed several experiments to build the baseline model on each dataset, and the evaluated accuracies are shown in Table 1. In the table, "Bi" and "Uni" denote bi-directional and uni-directional encoders respectively, and "Cell size" denotes the size of the encoder LSTM cells. The attention dimension is the same as the encoder cell size, and 1000 was used as the decoder size. The chunk size of MoChA is 2 for all experiments, since we saw no significant accuracy improvement from extending it beyond two.

(a) Bi-LSTM Full Attention
(b) Uni-LSTM Full Attention
(c) Uni-LSTM MoChA
Figure 2: Comparison of alignment by each attention method

As shown in Fig. 2, compared with the bi-directional LSTM case, the uni-directional model's alignment shows some time delay, because a uni-directional LSTM cannot use backward information from the input sequence [28]. Alignment calculation with soft full attention may have the advantage of seeing more information and exploiting more context, but for speech recognition, since the alignment of speech utterances is monotonic, that advantage may be small.

The accuracy of each model trained with the various optimization methods is shown in Table 2. The joint weight for joint CTC training was 0.8, and it was gradually increased during training. We used 13 and 50 as the maximum sizes of the frequency-axis and time(frame)-axis masks respectively, and one mask was applied per axis. For joint MWER training, we used 0.6 as the interpolation weight and a beam size of 4. Spec augmentation brought a large improvement, especially on test-other; after joint MWER training, the final WERs on test-clean and test-other improved by a relative 16.41% and 17.71% respectively over the baseline.

6.3 Compression

We applied hyper-LRA to the weight matrices of each layer, with the ranks chosen empirically. For the encoder LSTM layers, the ranks of the first and last layers were set larger than those of the internal layers due to the severity of their accuracy degradation. The distortion period was set to the total number of iterations in one sub-epoch divided by 16. The compressed model was retrained with the whole training data. In addition, we adopted 8-bit quantization using Tensorflow-lite, both to compress the model and to speed it up [29, 30]. As shown in Table 3, model sizes were reduced by at least 3.4 times by applying hyper-LRA, and by more than 13.68 times in total after 8-bit quantization, with minimal performance degradation. Furthermore, we were able to recover the performance by using MWER joint training. At the same time, the decoding speeds of the Korean and English models became 13.97 and 9.81 times faster than those of the baseline models, respectively. The average latencies of the final models were 140 ms and 156 ms, and the memory usage during decoding (CODE + DATA) was 230 MB and 235 MB for Korean and English, respectively.
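The 8-bit step can be illustrated with a plain affine quantizer; this is a generic sketch of post-training quantization, not TensorFlow Lite's exact scheme:

```python
import numpy as np

def quantize_affine(w, bits=8):
    """Map float weights to unsigned integers with a scale and zero point."""
    lo, hi = float(w.min()), float(w.max())
    qmax = 2 ** bits - 1
    scale = (hi - lo) / qmax if hi > lo else 1.0
    zero = int(round(-lo / scale))                 # integer mapped to 0.0
    q = np.clip(np.round(w / scale) + zero, 0, qmax).astype(np.uint8)
    return q, scale, zero

def dequantize(q, scale, zero):
    return (q.astype(np.float32) - zero) * scale

w = np.linspace(-1.0, 1.0, 101, dtype=np.float32)  # toy weight values
q, scale, zero = quantize_affine(w)
round_trip_err = float(np.abs(dequantize(q, scale, zero) - w).max())
```

The worst-case round-trip error is bounded by about one quantization step (here roughly 2/255), while storage drops 4x versus 32-bit floats, consistent with the size reductions reported in Table 3.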

Bits  Hyper-LRA  Korean                       English
                 WER    xRT   Size (MB)       WER    xRT   Size (MB)
32    no         9.37   4.89  530.56          9.03   4.32  530.50
32    yes        9.85   0.99  140.18          8.91   1.15  153.98
32    +MWER      9.60   1.26  140.18          8.64   1.48  153.98
8     no         9.64   1.18  132.88          9.07   0.94  132.87
8     yes        10.21  0.33  35.34           9.24   0.38  38.77
8     +MWER      9.80   0.35  35.34           8.88   0.44  38.77

Table 3: Performance of hyper-LRA. Model sizes were evaluated in megabytes (MB), and the beam size was 4. xRT denotes the real-time factor for decoding speed.

6.4 Personalization

We evaluated our on-demand adaptation method on three domains in Korean. Names of contacts, IoT devices, and applications were used to synthesize utterances from pattern sentences in which the names were replaced with a class tag, such as "call @name". Individual n-gram LMs were built for each domain using the synthesized corpus. The LMs for the specific domains were each built within 5 seconds, as shown in Table 4.

Domain   Entities  Patterns  Utterances  Time (s)
Contact  2307      23        53061       4.37
App      699       25        17475       1.78
IoT      441       4         1764        0.74

Table 4: Building times for n-gram LMs (in seconds).

As shown in Table 5, the WER for the App domain dropped dramatically from 12.76% to 6.78%, without any accuracy degradation in the general domain. The additional cost of the LM fusion was less than 0.15 xRT on average, even though the number of LM look-ups reached millions. The LM sizes for the general domain and for the three target domains combined were around 43 MB and 2 MB, respectively. All test sets were recorded on mobile phones.

Domain   Length      MoChA          Adapted
         (in hours)  WER     xRT    WER     xRT
General  1.0         9.33    0.35   9.30    0.61
Contact  3.1         15.59   0.34   11.08   0.42
App      1.2         12.76   0.34   6.78    0.48
IoT      1.5         38.83   0.43   21.92   0.52

Table 5: Performance improvement from on-demand adaptation. *xRTs were evaluated on-device, while WERs were evaluated on servers with the uncompressed MoChA model from Table 1.

7 Discussion

We have constructed the first on-device streaming ASR system based on MoChA models trained with a large corpus. To overcome the difficulties in training MoChA models, we adopted various training strategies such as joint loss training with CTC and MWER, layer-wise pre-training, and data augmentation. Moreover, by introducing hyper-LRA, we could reduce the size of our MoChA models to fit on devices without sacrificing recognition accuracy. For personalization, we used a shallow fusion method with n-gram LMs, which improved results on the target domains without sacrificing accuracy in the general domain.

References

  • [1] Eric Battenberg, Jitong Chen, Rewon Child, Adam Coates, Yashesh Gaur, Yi Li, Hairong Liu, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, and Zhenyao Zhu, “Exploring neural transducers for end-to-end speech recognition,” 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 206–213, 2017.
  • [2] Rohit Prabhavalkar, Kanishka Rao, Tara N. Sainath, Bo Li, Leif Johnson, and Navdeep Jaitly, “A comparison of sequence-to-sequence models for speech recognition,” in Proc. Interspeech 2017, 2017, pp. 939–943.
  • [3] William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals, “Listen, attend and spell,” ArXiv, vol. abs/1508.01211, 2015.
  • [4] Alex Graves, “Sequence transduction with recurrent neural networks,” ArXiv, vol. abs/1211.3711, 2012.
  • [5] Kanishka Rao, Hasim Sak, and Rohit Prabhavalkar, “Exploring architectures, data and units for streaming end-to-end speech recognition with rnn-transducer,” 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 193–199, 2017.
  • [6] Chung-Cheng Chiu, Tara N. Sainath, Yonghui Wu, Rohit Prabhavalkar, Patrick Nguyen, Zhifeng Chen, Anjuli Kannan, Ron J. Weiss, Kanishka Rao, Ekaterina Gonina, Navdeep Jaitly, Bo Li, Jan Chorowski, and Michiel Bacchiani, “State-of-the-art speech recognition with sequence-to-sequence models,” 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4774–4778, 2018.
  • [7] Navdeep Jaitly, David Sussillo, Quoc V. Le, Oriol Vinyals, Ilya Sutskever, and Samy Bengio, “A neural transducer,” ArXiv, vol. abs/1511.04868, 2016.
  • [8] Colin Raffel, Thang Luong, Peter J. Liu, Ron J. Weiss, and Douglas Eck, “Online and linear-time attention by enforcing monotonic alignments,” in ICML, 2017.
  • [9] Chung-Cheng Chiu and Colin Raffel, “Monotonic chunkwise attention,” in International Conference on Learning Representations, 2018.
  • [10] Rohit Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick Nguyen, Zhifeng Chen, Chung-Cheng Chiu, and Anjuli Kannan, “Minimum word error rate training for attention-based sequence-to-sequence models,” 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4839–4843, 2018.
  • [11] Albert Zeyer, André Merboldt, Ralf Schlüter, and Hermann Ney, “A comprehensive analysis on attention models,” in Interpretability and Robustness in Audio, Speech, and Language (IRASL) Workshop, Conference on Neural Information Processing Systems (NeurIPS), Montreal, Canada, Dec. 2018.
  • [12] D. Lee, P. Kapoor, and B. Kim, “Deeptwist: Learning model compression via occasional weight distortion,” ArXiv, vol. abs/1810.12823, 2018.
  • [13] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, “Neural machine translation by jointly learning to align and translate,” in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
  • [14] Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio, “Attention-based models for speech recognition,” in Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, 2015, pp. 577–585.
  • [15] Albert Zeyer, Kazuki Irie, Ralf Schlüter, and Hermann Ney, “Improved training of end-to-end attention models for speech recognition,” in Proc. Interspeech 2018, 2018, pp. 7–11.
  • [16] Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, Nov. 1997.
  • [17] Alex Graves, Santiago Fernández, Faustino J. Gomez, and Jürgen Schmidhuber, “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,” in ICML, 2006.
  • [18] Suyoun Kim, Takaaki Hori, and Shinji Watanabe, “Joint ctc-attention based end-to-end speech recognition using multi-task learning,” 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4835–4839, 2017.
  • [19] Rico Sennrich, Barry Haddow, and Alexandra Birch, “Neural machine translation of rare words with subword units,” in Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers, 2016.
  • [20] Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le, “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition,” in Proc. Interspeech 2019, 2019, pp. 2613–2617.
  • [21] J. Xue, J. Li, D. Yu, M. Seltzer, and Y. Gong, “Singular value decomposition based low-footprint speaker adaptation and personalization for deep neural network,” in ICASSP, 2014.
  • [22] Anjuli Kannan, Yonghui Wu, Patrick Nguyen, Tara N. Sainath, Zhijeng Chen, and Rohit Prabhavalkar, “An analysis of incorporating an external language model into a sequence-to-sequence model,” 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.
  • [23] Chanwoo Kim, Minkyu Shin, Abhinav Garg, and Dhananjaya Gowda, “Improved Vocal Tract Length Perturbation for a State-of-the-Art End-to-End Speech Recognition System,” in Proc. Interspeech 2019, 2019, pp. 739–743.
  • [24] Patrick Doetsch, Albert Zeyer, Paul Voigtlaender, Ilya Kulikov, Ralf Schlüter, and Hermann Ney, “Returnn: The rwth extensible training framework for universal recurrent neural networks,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017.
  • [25] Alexander Sergeev and Mike Del Balso, “Horovod: fast and easy distributed deep learning in TensorFlow,” ArXiv, vol. abs/1802.05799, 2018.
  • [26] C. Kim, S. Kim, K. Kim, M. Kumar, J. Kim, K. Lee, C. Han, A. Garg, E. Kim, M. Shin, S. Singh, L. Heck, and D. Gowda, “End-to-end training of a large vocabulary end-to-end speech recognition system,” in 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019, accepted.
  • [27] Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely, “The kaldi speech recognition toolkit,” in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. Dec. 2011, IEEE Signal Processing Society, IEEE Catalog No.: CFP11SRW-USB.
  • [28] Hasim Sak, Andrew W. Senior, Kanishka Rao, and Françoise Beaufays, “Fast and accurate recurrent neural network acoustic models for speech recognition,” ArXiv, vol. abs/1507.06947, 2015.
  • [29] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng, “TensorFlow: Large-scale machine learning on heterogeneous systems,” 2015, software available from tensorflow.org.
  • [30] Google Inc., “Tensorflow Lite,” Online documents; https://www.tensorflow.org/lite, 2018, [Online; accessed 2018-02-07].