End-to-end speech recognition systems based on the Recurrent Neural Network Transducer (RNN-T) architecture have been shown to achieve state-of-the-art performance for large-vocabulary continuous speech recognition. An RNN-T model has also been successfully deployed to run efficiently on mobile devices. The ability to perform efficient on-device speech recognition opens up new avenues for on-device personalization of these models without the need for complex server-side infrastructure. More importantly, personalization means that user data and models are stored on users’ devices rather than sent to a centralized server, thus increasing data privacy and security.
To personalize speech models on device, we need a learning algorithm that is memory efficient and robust to overfitting. We also need users to provide labeled data, which can be time consuming. To minimize the user effort needed, we investigate ways of improving on-device learning and compare personalization techniques that require different levels of user engagement.
The remainder of this paper is organized as follows. Section 2 describes the on-device speech personalization framework. Section 3 presents several personalization methods that aim at learning new named entities. In Section 4, we describe the Wiki-Names dataset, which we have collected specifically to evaluate the effectiveness of personalization techniques for learning new named entities. Section 5 describes the evaluation metrics we use throughout our experiments. Section 6 presents the experimental results.
2 On-device Speech Personalization
Fig. 1 shows how the personalization workflow runs on a mobile device. It consists of two main phases: first, the user interacts with the device by voice. The user’s speech is captured and transcribed into text using an on-device speech recognition system. The user may then optionally edit the recognition outputs to correct errors. The input speech and corrected texts are stored in a training cache on device. During the second phase, data from the training cache is used to perform on-device personalization of the speech model. This phase is usually performed while the device is idle so as to maximize the available memory and compute resources for personalization and preserve user experience. These two phases may take place in an interleaving fashion to achieve an online personalization process, where the system continuously adapts to the user’s voice and usage patterns.
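The two phases above can be sketched as a small cache abstraction. This is a minimal illustration only; all names here (`TrainingCache`, `add`, `train_when_idle`) are hypothetical and do not correspond to an actual production API:

```python
from dataclasses import dataclass, field

@dataclass
class TrainingCache:
    """Minimal sketch of the two-phase workflow: utterances and user-corrected
    transcripts accumulate during use; training runs when the device is idle."""
    examples: list = field(default_factory=list)

    def add(self, audio, hypothesis, correction=None):
        # Phase 1: store the audio with the corrected text (or the raw
        # hypothesis if the user made no edits).
        self.examples.append((audio, correction or hypothesis))

    def train_when_idle(self, train_step, device_idle):
        # Phase 2: consume the cache only when resources are free.
        if device_idle and self.examples:
            train_step(self.examples)
            self.examples.clear()
```

Interleaving the two phases, with `train_when_idle` invoked whenever the device reports an idle period, yields the online personalization process described above.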
Fig. 2 depicts the RNN-T model architecture that will be used for the experiments in this paper. The model consists of an encoder with 8 Long Short-Term Memory (LSTM)
layers for the acoustic features, an encoder with 2 LSTM layers for the label sequences (denoted as the language model (LM) component) and a joint network with a single hidden layer. The LM and joint network components are referred to as the decoder. Each LSTM layer has 2048 hidden units along with a projection to 640 units. 3 consecutive frames of 80-dimensional log Mel features (extracted every 10 milliseconds) are stacked together to form the 240-dimensional inputs to the network. There is also frame stacking after the second encoder layer with a stride size of 2 to increase the frame shift to 60 milliseconds. The network outputs correspond to 75 graphemes and a blank symbol.
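The input feature pipeline can be sketched as follows. `stack_frames` is a hypothetical helper; the 3-frame stacking is assumed to subsample by 3 (a 30 ms shift), which is consistent with the stride-2 stacking later doubling the shift to 60 ms:

```python
import numpy as np

def stack_frames(feats, stack=3, stride=3):
    """Stack `stack` consecutive frames and subsample by `stride`.
    feats: [num_frames, 80] log-Mel features extracted every 10 ms;
    output: [ceil(num_frames / stride), stack * 80]."""
    n, _ = feats.shape
    # Pad by repeating the last frame so every position sees `stack` frames.
    pad = np.concatenate([feats, np.repeat(feats[-1:], stack - 1, axis=0)])
    stacked = np.concatenate([pad[i:i + n] for i in range(stack)], axis=1)
    return stacked[::stride]
```

For example, one second of speech (100 frames of 80-dimensional features) becomes 34 stacked frames of 240 dimensions.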
3 Vocabulary Acquisition
Vocabulary acquisition is an important aspect of personalizing speech models, providing the system with the flexibility to learn new words that do not appear in the training data. The model becomes attuned to the small set of words that matter to the individual user, such as the names of family and friends.
For an end-to-end sequence model whose output units are sub-words, the vocabulary of the system is not explicitly defined. If the sub-word units are sufficient to construct all possible words, there are technically no out-of-vocabulary (OOV) words. However, words that do not appear in the training data will typically be assigned a low probability by the model, leading to recognition errors for rare or unseen words. The output probabilities of new words can be increased by modifying the decoder to incorporate a biased language model, using techniques such as biasing [7, 8, 9], shallow fusion, deep fusion and cold fusion. Alternatively, the end-to-end model can also be fine-tuned to learn the new words. In the following, we describe the biasing and model fine-tuning approaches for handling new words.
3.1 Biasing

Biasing is a technique whereby new vocabulary is injected into the speech recognition system by boosting the language model probability of the new words during decoding. This approach does not require retraining the existing model and can be viewed as a form of language model interpolation. It requires only a list of biasing words or phrases. It has been shown that personalization can be achieved efficiently through new vocabulary injection and on-the-fly language model biasing [7, 8].
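As a crude illustration of the idea, biasing can be approximated by rescoring an n-best list, adding a fixed log-score boost per occurrence of a biasing phrase. The interface and boost value here are hypothetical; production systems bias the language model on the fly during beam search rather than post hoc:

```python
def bias_rescore(nbest, bias_phrases, boost=2.0):
    """Rescore an n-best list of (text, log-score) pairs: add `boost`
    to the score for each occurrence of a biasing phrase, then return
    the best hypothesis. A toy stand-in for on-the-fly LM biasing."""
    rescored = []
    for text, logp in nbest:
        hits = sum(text.lower().count(p.lower()) for p in bias_phrases)
        rescored.append((text, logp + boost * hits))
    return max(rescored, key=lambda x: x[1])[0]
```

With an empty biasing list the ranking is unchanged; with the new name in the list, an otherwise lower-scoring hypothesis containing it can win.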
3.2 Model fine-tuning
A more direct way to learn new words is to fine-tune the model with training data that contains examples of the new words. This requires both speech and the corresponding transcription. High-quality speech transcription is important to train speech recognition models. For server-side training, this is typically outsourced to professional transcribers. However, on-device personalization has to rely solely on individual users to provide training supervision. This would require a substantial amount of user effort and may not be practical for on-device training.
There are several ways to reduce user effort. For example, users may provide a list of named entities important to them, or simply grant access to an existing list, such as contact names, place names and media names used by other apps. A text-to-speech engine can then be used to synthesize the corresponding speech. This requires the same amount of user effort as biasing (cf. Section 3.1). Alternatively, users may provide example sentences that contain the words of interest. This will provide more contextual information for the new words and may improve generalization to unseen contexts. Finally, users may also speak to the system and correct any recognition errors, as illustrated in the personalization workflow in Fig. 1. If the user corrects every recognition error, we can achieve supervised learning. Otherwise, if the user only corrects some of the errors, we can perform semi-supervised learning.
Although the speech RNN-T model has about 117M parameters, our results show that it is possible to fine-tune the entire model and achieve good improvements. However, fine-tuning the full model can easily lead to overfitting, which can be suppressed by using a lower learning rate and early stopping. This is consistent with previously reported results, where robust domain adaptation was achieved by fine-tuning an entire model with millions of parameters on a small amount of adaptation data.
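These two overfitting controls can be sketched as a generic loop; `step_fn` and `eval_fn` are toy stand-ins for one epoch of training and a held-out evaluation, not the actual training code:

```python
def fine_tune(params, step_fn, eval_fn, lr=1e-4, max_epochs=15, patience=2):
    """Fine-tuning with a low learning rate and early stopping:
    keep the parameters with the best held-out loss, and stop once
    the held-out loss fails to improve for `patience` epochs."""
    best, best_params, bad = float("inf"), params, 0
    for _ in range(max_epochs):
        params = step_fn(params, lr)      # one epoch of updates
        dev_loss = eval_fn(params)        # held-out loss
        if dev_loss < best - 1e-6:
            best, best_params, bad = dev_loss, params, 0
        else:
            bad += 1
            if bad >= patience:           # stop when the dev loss plateaus
                break
    return best_params
```

Returning the best-scoring parameters (rather than the final ones) is what prevents the last few overfitting epochs from degrading the model.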
There are existing speaker adaptation techniques for neural networks that offer a more compact model representation by having a large number of global parameters shared across different speakers and a smaller number of speaker-specific parameters [13, 14, 15, 16, 17, 18, 19, 20]. These methods are more suitable for a centralized adaptation solution where the adapted models for all speakers are stored on a server.
On the other hand, regularization [21, 22] and data augmentation [23, 24] techniques can also be used to prevent overfitting when training on a small amount of data. They achieve robust model adaptation without modifying the model structure. These approaches are suitable for on-device personalization since the personalized models are stored separately on individual user devices. In the following section, we will describe the Elastic Weight Consolidation (EWC) regularization technique to mitigate the overfitting problem.
3.2.2 Elastic Weight Consolidation
To perform personalization, we first train the model on a multi-speaker corpus, then fine-tune it on a single user. During the second training step, the model “forgets” about the previous task; this problem is called “catastrophic forgetting”. We can mitigate this problem using Elastic Weight Consolidation (EWC), which aims at maximizing $p(\theta \mid \mathcal{D})$, i.e. finding the most probable parameter values, $\theta$, given some dataset, $\mathcal{D}$. Assuming that $\mathcal{D}$ contains data from two tasks, $A$ and $B$, and that the model has already been trained on task A, the conditional probability is given by:
$$\log p(\theta \mid \mathcal{D}) = \log p(\mathcal{D}_B \mid \theta) + \log p(\theta \mid \mathcal{D}_A) - \log p(\mathcal{D}_B)$$
The term $\log p(\mathcal{D}_B \mid \theta)$ corresponds to the standard loss function for task B.
$\log p(\theta \mid \mathcal{D}_A)$ can be approximated as a Gaussian distribution with the mean given by the parameters fine-tuned on task A and the precision matrix given by the Fisher Information Matrix, $F$. Therefore, the EWC loss function has an additional regularization term:
$$\mathcal{L}(\theta) = \mathcal{L}_B(\theta) + \sum_i \frac{\lambda}{2} F_i \left(\theta_i - \theta_{A,i}^*\right)^2$$
where $\theta$ denotes the set of model parameters, $\theta_{A,i}^*$ is the value of the $i$th parameter fine-tuned for task A, and $\lambda$ is the penalty weight that can be adjusted to control the amount of regularization. The Fisher Information Matrix, $F$, is used to penalize moving in directions critical (with high Fisher Information) to the first task. We compute $F$ by summing the squared gradients of the loss with a model trained on the first task, as described in the original EWC paper.
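A minimal numerical sketch of the EWC penalty $\mathcal{L}_B(\theta) + (\lambda/2)\sum_i F_i(\theta_i - \theta_{A,i}^*)^2$ and its gradient, over a flat parameter vector (toy interfaces, not the actual training code):

```python
import numpy as np

def fisher_diagonal(per_example_grads):
    """Diagonal Fisher estimate: average of squared per-example gradients,
    computed with a model trained on the first task."""
    return np.mean(np.square(per_example_grads), axis=0)

def ewc_loss_and_grad(theta, loss_b, grad_b, theta_a, fisher, lam):
    """EWC loss: task-B loss plus (lam / 2) * sum_i F_i * (theta_i - theta_A_i)^2,
    together with the matching gradient."""
    diff = theta - theta_a
    loss = loss_b + 0.5 * lam * np.sum(fisher * diff ** 2)
    grad = grad_b + lam * fisher * diff
    return loss, grad
```

Directions with high Fisher information are pulled strongly back towards the task-A parameters, while directions with near-zero Fisher information are free to move, which is exactly the selective regularization EWC is designed to provide.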
4 The Wiki-Names Dataset

We collected a dataset called Wiki-Names to evaluate the performance of speech personalization. It consists of sentences extracted from English Wikipedia pages containing repeated occurrences of politician and artist names that are difficult to recognize. To identify the difficult words, we extracted a list of names and ran our baseline speech recognition system on synthesized speech for these names. The ones that were incorrectly recognized were deemed to be difficult words. Four categories of names were chosen: American, Chinese, Indian and Italian. We recruited 100 participants to read the extracted sentences; each speaker participated in only one category and was comfortable reading the names in that category. Nevertheless, the prompts were still difficult to read due to the presence of many named entities. As a consequence, the resulting speech was often accented and disfluent (with hesitations and corrections).
Table 1 columns: Category, No. of speakers, Amount of data (mins).
Table 1 shows the distribution of the data over the four categories. Each participant provided 50 utterances (4.6 minutes) of training data and 20 utterances (1.9 minutes) of test data. The prompts for each user covered five names, each with 10 training utterances and 4 test utterances, with each name potentially appearing multiple times per utterance. We manually transcribed the data to use as ground truth for both training and evaluation.
5 Evaluation Metrics

We evaluate performance based on word error rate (WER), a standard metric used to evaluate speech recognition systems. In order to measure performance on a selected subset of words, we treat our problem as a retrieval task (similar to keyword spotting). We propose using keyword-dependent precision and recall to measure vocabulary acquisition performance, computed as follows:
$$\text{precision} = \frac{C}{N_{\text{hyp}}} \qquad \text{recall} = \frac{C}{N_{\text{ref}}}$$
where $N_{\text{ref}}$ and $N_{\text{hyp}}$ are the number of keywords in the reference and hypothesis, respectively, and $C$ is the number of keywords that are correctly recognized. To compute $C$, we first perform edit distance alignment between the reference and hypothesis texts. Then, we count the number of correct matches for the keywords only.
Precision and recall can better explain the performance of a personalization method. A low recall indicates the model’s inability to recognize the new entities (misses), while a low precision indicates over-generation of those named entities (false alarms).
For example, Table 2 shows a pair of aligned reference and hypothesis texts. There are 3 relevant words in the reference (‘Zhuge’, ‘Dan’ and ‘Yangdu’) and 2 relevant words in the hypothesis (‘Zhuge’ and ‘Zhuge’). Only one keyword in the hypothesis has been aligned correctly with the reference. The recall is penalized by the system’s inability to recognize ‘Dan’ and ‘Yangdu’ correctly while the precision is penalized by falsely recognizing ‘Zhuge’ at the end.
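The computation for this example can be sketched as follows. The exact hypothesis wording below is invented for illustration (only the keyword pattern matches the Table 2 example), and difflib’s matching-block alignment stands in for a proper edit-distance alignment:

```python
import difflib

def keyword_precision_recall(ref, hyp, keywords):
    """Keyword-dependent precision/recall: align the reference and
    hypothesis word sequences and count keywords that fall inside
    matching regions (the correctly recognized keywords, C)."""
    n_ref = sum(w in keywords for w in ref)
    n_hyp = sum(w in keywords for w in hyp)
    correct = 0
    for block in difflib.SequenceMatcher(None, ref, hyp).get_matching_blocks():
        correct += sum(w in keywords for w in ref[block.a:block.a + block.size])
    precision = correct / n_hyp if n_hyp else 1.0
    recall = correct / n_ref if n_ref else 1.0
    return precision, recall
```

For the example above (3 relevant words in the reference, 2 in the hypothesis, 1 aligned correctly), this yields a precision of 1/2 and a recall of 1/3.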
Generally, as we train our models to learn new words, recall improves gradually as the model begins to recognize those words. However, the personalized model also starts to over-generate those words in its output, which reduces precision. Precision and recall therefore offer a way of measuring this trade-off.
6 Experimental Results
In this section, we present experimental results to evaluate various techniques for personalizing the RNN-T speech recognition model. The architecture of this model is described in Section 2. The model was trained on 35 million anonymized, hand-transcribed English utterances (∼27,500 hours) from Google’s voice search traffic. All models are trained and evaluated using TensorFlow. The RNN-T loss and gradients are computed using an efficient implementation. We fine-tune the models using the momentum optimizer for 15 epochs with a fixed learning rate and a batch size of 5.
6.1 Fine-tune Selected Layers
Table 3 columns: Model, # of fine-tuning parameters, WER (%).
First, we evaluate the WER performance of fine-tuning selected layers of the RNN-T model. Table 3 shows the WER performance of the personalized models fine-tuned using either synthesized (TTS) or actual (Real) speech data, along with the corresponding number of fine-tuned parameters. (Note that (Real) represents the best-case scenario where users correct all the recognition errors to achieve fully supervised training.) Wiki-Names is a difficult dataset due to the presence of many named entities as well as accented and disfluent speech. The baseline model has a rather high WER of 67.2%, making many errors on the names as well as on short words (e.g. ‘a’, ‘an’ and ‘the’).
Fine-tuning the entire model with the actual speech data achieves the lowest WER of 50.4%. If we fine-tune a partial model, it is better to fine-tune the decoder than the encoder, despite the decoder having about 5 times fewer parameters. This is expected because it is much easier for the LM component of the model to learn a new named entity. The additional gains from fine-tuning the entire model may come from learning the speakers’ accents. On the other hand, if we fine-tune the model with synthesized speech, we achieve the best WER of 61.8% by updating the LM component only. This is again expected because fine-tuning the rest of the model would overfit to the synthesized speech.
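Selecting which part of the model to fine-tune amounts to filtering the set of trainable variables. A sketch with illustrative variable-name prefixes (these names are assumptions, not the actual checkpoint names):

```python
def trainable_subset(variables, part="decoder"):
    """Pick which RNN-T parameters to fine-tune by name prefix.
    `variables` maps variable names to variables; the prefixes below
    are illustrative: encoder/, lm/ (label encoder) and joint/."""
    prefixes = {
        "encoder": ("encoder/",),
        "decoder": ("lm/", "joint/"),  # LM + joint network
        "lm": ("lm/",),
        "all": ("",),
    }[part]
    return {name: v for name, v in variables.items()
            if name.startswith(prefixes)}
```

Passing the resulting subset as the optimizer’s variable list leaves the remaining parameters frozen at their pre-trained values.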
6.2 Elastic Weight Consolidation
Next, we study the effectiveness of Elastic Weight Consolidation (EWC), as described in Section 3.2.2, in handling a small amount of training data for speech personalization. We evaluate the performance of four personalized speech models (one speaker from each category) on a separate voice search test set to understand how much the models have deviated from the baseline after personalization. Table 4 shows the average performance of two types of personalized model (fine-tuning either the LM component or the entire model) with different numbers of training epochs, comparing training without and with EWC. The baseline model achieves 7.3% WER on the voice search test set. As expected, WER increases as we personalize the models towards the Wiki-Names data with more training epochs. Without EWC, the WER on voice search increases from 7.3% to 8.3% when fine-tuning the LM component and to 11.4% when fine-tuning the entire model. In this case, fine-tuning the LM component only offers a better trade-off between the two test sets: after 15 epochs, it achieves 35.9% WER on Wiki-Names and 8.3% on voice search. In comparison, fine-tuning the entire model for 10 epochs achieves a similar WER of 35.2% on Wiki-Names, but a higher WER of 11.4% on voice search. With EWC, we were able to reduce the degradation on voice search. Surprisingly, we also achieve a better WER of 28.7% on Wiki-Names when fine-tuning the entire model. The penalty term in EWC serves as an effective regularizer to mitigate overfitting to a small amount of personalization data, and even assists with learning the new data.
6.3 Effect of TTS Engine
Next, we study the effect of using different Text-To-Speech (TTS) engines for personalization. We choose four TTS engines that match closely with the four name categories: English (en), Chinese (zh), Indian English (en-in) and Italian (it). The results in Table 5 show that the choice of TTS engine matters much less than the gains over the baseline. Nevertheless, using matched TTS engines yields consistent improvements across all four name categories.
6.4 Semi-supervised Learning
So far, we have assumed that correct transcriptions are available for training. However, training data for on-device personalization exists only on user devices, and obtaining correct transcriptions on-device requires time-consuming effort from users. In a realistic scenario, users may not correct all the errors in the recognition outputs. To better understand the impact of different amounts of user effort on personalization, we measure the breakdown of precision and recall for the names of interest, as well as for the rest of the words in the sentence (non-names, which may include other named entities that we do not track). In Table 6, we compare types of supervision that require different amounts of user effort for training. The baseline model performs poorly on the names, with a very low recall of 2.7%. It rarely outputs the names, but does so with a very high precision of 97.9%.
For unsupervised training, we use the recognition output of the baseline model (67.2% WER) as supervision for personalization. This requires no additional correction effort from the user. However, there is no performance gain because the new names are not generated by the recognizer in the unsupervised transcripts. Alternatively, if we use the biased recognition output as supervision, the name recall rate improves to 10.6%. However, the precision for the names, the recall for the non-names, and the overall WER all degrade significantly because the model is learning from erroneous transcripts.
For semi-supervised training, we simulate the situation where users only correct the errors that matter (in this case, the new named entities of interest). We do so by aligning the baseline recognition outputs with the reference and replacing the words in the hypotheses that are aligned to names in the references. For example, given the aligned reference and hypothesis in Table 2, the name-corrected transcript will be ‘Zhuge Dan was from Yangdu Zhuge’. This assumes that the user corrects the missing ‘Dan’ and the incorrectly recognized ‘Yangdu’. With a relatively small effort from the user (correcting only the names), we see a substantial improvement in the recall of those names (from 3.5% to 68.2%). However, the precision for the names and the recall for the non-names are still much worse than the baseline, and the overall relative WER improvement is only about 5.1% (from 67.2% to 63.8%).
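This name-correction simulation can be sketched with difflib opcodes. The particular hypothesis words below are invented for illustration; only the corrected output is taken from the Table 2 example:

```python
import difflib

def name_corrected_transcript(ref, hyp, names):
    """Simulate a user who fixes only the name errors: align the hypothesis
    to the reference, substitute names at mis-recognized positions, and
    re-insert names the recognizer dropped; all other errors are kept."""
    out = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, ref, hyp).get_opcodes():
        if tag in ("equal", "insert"):
            out.extend(hyp[j1:j2])          # keep hypothesis words as-is
        elif tag == "delete":
            out.extend(w for w in ref[i1:i2] if w in names)  # re-insert missed names
        else:                               # replace: correct only name positions
            rs, hs = ref[i1:i2], hyp[j1:j2]
            for k in range(max(len(rs), len(hs))):
                r = rs[k] if k < len(rs) else None
                h = hs[k] if k < len(hs) else None
                if r in names:
                    out.append(r)
                elif h is not None:
                    out.append(h)
    return out
```

Note that erroneous non-name words (including falsely recognized extra names) survive into the corrected transcript, which is what makes this supervision only semi-supervised.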
6.5 Biasing and TTS Data
So far, we have considered fine-tuning the RNN-T model using full sentences. Next, we also compare with two other personalization techniques that require only the names: 1) biasing (as described in Section 3.1); and 2) fine-tuning the LM component using synthesized speech of the names only.
Table 7 compares the word error rate as well as the precision and recall of four personalization techniques. For this experiment, we use a slightly different decoding setup that includes end-pointing and supports biasing. The results in Table 7 are therefore not directly comparable with the earlier results in the paper, but are self-consistent.
Our results show that training with synthesized sentences achieves better results than training with synthesized names only. This is not surprising because the model sees the names in different linguistic contexts during training. Moreover, training with the actual speech data performs better than using synthesized data because the model learns the user-specific pronunciation of the names.
We also observe that biasing can be applied to fine-tuned models to obtain consistent further improvements. With biasing, the recall rate for the names improves as the user provides more supervision: 30.1% (biasing only) → 48.6% (with TTS sentences) → 64.4% (actual speech with name-corrected transcripts) → 73.5% (fully supervised). However, biasing increases the number of false alarms for the names, which results in lower name precision.
6.6 On-device Benchmark
Table 8 columns: Batch Size, Memory (GB), Epoch Time (minutes).
Finally, we ran benchmark experiments to measure the memory usage and speed of on-device training. Training was done on a Pixel 3 mobile phone using 50 utterances from one speaker for 1 epoch. The benchmark results, comparing different batch sizes, are shown in Table 8. The batch size does not greatly affect memory usage: training consumes between 1.5 and 1.8 GB with batch sizes of 1, 5, 10 and 20. On the other hand, increasing the batch size significantly improves throughput. With a batch size of 1, one epoch of training takes 50 minutes; increasing the batch size to 20 reduces the training time per epoch by a factor of 5, to 10 minutes. Therefore, training on device for 15 epochs with 5 utterances per mini-batch (the configuration used for the experiments in this paper) takes about 5.5 hours. Increasing the batch size to 20 reduces the training time to 2.5 hours, which fits reasonably well within one typical training session while the phone is idle. Moreover, personalization can also be performed over multiple training sessions to progressively improve the model as more user data becomes available.
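The totals above follow from simple arithmetic (the 22-minute per-epoch figure for batch size 5 is implied by the quoted 5.5-hour total, not stated directly in Table 8):

```python
def total_training_hours(minutes_per_epoch, epochs=15):
    """Total on-device training time, in hours, for a given per-epoch cost."""
    return minutes_per_epoch * epochs / 60.0
```

For example, 15 epochs at 10 minutes per epoch (batch size 20) gives 2.5 hours, and at 22 minutes per epoch (batch size 5) gives 5.5 hours.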
7 Conclusions

This paper investigates the effectiveness of several techniques for personalizing end-to-end speech recognition models on device by learning new named entities. We find that it is possible to fine-tune the entire model with 117 million parameters using a small learning rate and early stopping. Elastic weight consolidation can be used to mitigate overfitting, and achieves a 10.3% relative word error rate improvement on the Wiki-Names dataset. Since supervised on-device training would rely on the user to provide labeled data, we compare training with differing amounts of supervision (and corresponding user effort). We propose using keyword-dependent precision and recall to measure vocabulary acquisition performance. Our results show that biasing combined with synthesized sentence data can improve the recall of new named entities from 2.4% to 48.6% without the user providing speech inputs or correcting the recognition outputs. The name recall rate can be further improved to 64.4% and 73.5% if the user corrects only the name errors (semi-supervised) or all the errors (supervised), respectively.
References

-  Alex Graves, “Sequence transduction with recurrent neural networks,” arXiv preprint arXiv:1211.3711, 2012.
-  Kanishka Rao, Haşim Sak, and Rohit Prabhavalkar, “Exploring architectures, data and units for streaming end-to-end speech recognition with RNN-transducer,” in IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2017, pp. 193–199.
-  Yanzhang He, Tara N Sainath, Rohit Prabhavalkar, Ian McGraw, Raziel Alvarez, Ding Zhao, David Rybach, Anjuli Kannan, Yonghui Wu, Ruoming Pang, et al., “Streaming end-to-end speech recognition for mobile devices,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 6381–6385.
-  Haşim Sak, Andrew Senior, and Françoise Beaufays, “Long short-term memory recurrent neural network architectures for large scale acoustic modeling,” in Fifteenth annual conference of the international speech communication association, 2014, pp. 338–342.
-  Golan Pundak and Tara N Sainath, “Lower frame rate neural network acoustic models.,” in Proc. INTERSPEECH, 2016, pp. 22–26.
-  Shubham Toshniwal, Anjuli Kannan, Chung-Cheng Chiu, Yonghui Wu, Tara N Sainath, and Karen Livescu, “A comparison of techniques for language model integration in encoder-decoder speech recognition,” in 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2018, pp. 369–375.
-  Keith Hall, Eunjoon Cho, Cyril Allauzen, Françoise Beaufays, Noah Coccaro, Kaisuke Nakajima, Michael Riley, Brian Roark, David Rybach, et al., “Composition-based on-the-fly rescoring for salient n-gram biasing,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015, pp. 1418–1422.
-  Petar Aleksic, Cyril Allauzen, David Elson, Aleksandar Kracun, Diego Melendo Casado, and Pedro J Moreno, “Improved recognition of contact names in voice commands,” in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 5172–5175.
-  Ian McGraw, Rohit Prabhavalkar, Raziel Alvarez, Montse Gonzalez Arenas, Kanishka Rao, David Rybach, Ouais Alsharif, Haşim Sak, Alexander Gruenstein, Françoise Beaufays, et al., “Personalized speech recognition on mobile devices,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016, pp. 5955–5959.
-  Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio, “On using monolingual corpora in neural machine translation,” arXiv preprint arXiv:1503.03535, 2015.
-  Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, and Adam Coates, “Cold fusion: Training seq2seq models together with language models,” arXiv preprint arXiv:1708.06426, 2017.
-  Khe Chai Sim, Arun Narayanan, Ananya Misra, Anshuman Tripathi, Golan Pundak, Tara N Sainath, Parisa Haghani, Bo Li, and Michiel Bacchiani, “Domain adaptation using factorized hidden layer for robust automatic speech recognition,” Proc. Interspeech 2018, pp. 892–896, 2018.
-  Khe Chai Sim, Yanmin Qian, Gautam Mantena, Lahiru Samarakoon, Souvik Kundu, and Tian Tan, “Adaptation of deep neural network acoustic models for robust automatic speech recognition,” in New Era for Robust Speech Recognition, S. Watanabe, M. Delcroix, F. Metze, and J.R. Hershey, Eds., chapter 9, pp. 219–243. Springer, 2017.
-  Bo Li and Khe Chai Sim, “Comparison of discriminative input and output transformations for speaker adaptation in the hybrid nn/hmm systems,” in Eleventh Annual Conference of the International Speech Communication Association, 2010, pp. 526–529.
-  George Saon, Hagen Soltau, David Nahamoo, and Michael Picheny, “Speaker adaptation of neural network acoustic models using i-vectors,” in ASRU, 2013, pp. 55–59.
-  Andrew Senior and Ignacio Lopez-Moreno, “Improving DNN speaker independence with i-vector inputs,” in Proc. ICASSP. IEEE, 2014, pp. 225–229.
-  Pawel Swietojanski and Steve Renals, “Learning hidden unit contributions for unsupervised speaker adaptation of neural network acoustic models,” in Spoken Language Technology Workshop (SLT), 2014 IEEE. IEEE, 2014, pp. 171–176.
-  Tian Tan, Yanmin Qian, Maofan Yin, Yimeng Zhuang, and Kai Yu, “Cluster adaptive training for deep neural network,” in Proc. ICASSP. IEEE, 2015, pp. 4325–4329.
-  Chunyang Wu and Mark JF Gales, “Multi-basis adaptive neural network for rapid adaptation in speech recognition,” in Proc. ICASSP. IEEE, 2015, pp. 4315–4319.
-  Lahiru Samarakoon and Khe Chai Sim, “Factorized hidden layer adaptation for deep neural network based acoustic modeling,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 12, pp. 2241–2250, 2016.
-  Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
-  James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell, “Overcoming catastrophic forgetting in neural networks,” CoRR, vol. abs/1612.00796, 2016.
-  Tom Ko, Vijayaditya Peddinti, Daniel Povey, and Sanjeev Khudanpur, “Audio augmentation for speech recognition,” in Sixteenth Annual Conference of the International Speech Communication Association, 2015, pp. 3586–3589.
-  R Lippmann, Edward Martin, and D Paul, “Multi-style training for robust isolated-word speech recognition,” in ICASSP’87. IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1987, vol. 12, pp. 705–708.
-  Michael McCloskey and Neal J Cohen, “Catastrophic interference in connectionist networks: The sequential learning problem,” in Psychology of learning and motivation, vol. 24, pp. 109–165. Elsevier, 1989.
-  Yaodong Zhang and James R Glass, “Unsupervised spoken keyword spotting via segmental dtw on gaussian posteriorgrams,” in 2009 IEEE Workshop on Automatic Speech Recognition & Understanding. IEEE, 2009, pp. 398–403.
-  Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng, “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” CoRR, vol. abs/1603.04467, 2016.
-  Tom Bagby, Kanishka Rao, and Khe Chai Sim, “Efficient implementation of recurrent neural network transducer in TensorFlow,” in 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2018, pp. 506–512.
-  Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton, “On the importance of initialization and momentum in deep learning,” in International Conference on Machine Learning, 2013, pp. 1139–1147.