Deep neural networks can not only extract acoustic features, which are used as inputs to traditional ASR models like Hidden Markov Models (HMMs) [24, 28], but also act as sequence transducers, which results in end-to-end neural ASR systems [2, 6].
One major challenge of sequence transduction is that the input and output sequences differ in length, and both lengths are variable. As a result, a speech transducer has to learn both the alignment and the mapping between acoustic inputs and linguistic outputs simultaneously. Several neural network-based speech models have been proposed in the past few years to address this challenge. In this work, we focus on understanding the differences between these transduction mechanisms. Specifically, we compare three transduction models: Connectionist Temporal Classification (CTC), the RNN-Transducer, and sequence-to-sequence (Seq2Seq) with attention [5, 3]. For the ASR task, these models differ mainly in the assumptions they make along three axes:
Conditional independence between predictions at different time steps, given the audio. This is not a reasonable assumption for the ASR task. CTC makes this assumption; RNN-Transducers and attention models do not.
The alignment between input and output units is monotonic. This is a reasonable assumption for the ASR task, and it enables models to perform streaming transcription. CTC and RNN-Transducers make this assumption, but attention models do not. (Here we focus on vanilla Seq2Seq models with full attention [6, 3]; there have been recent efforts to enforce local and monotonic attention, but they typically result in a loss of performance.)
Hard vs. soft alignments. CTC and RNN-Transducer models explicitly treat the alignment between input and output as a latent variable and marginalize over all possible hard alignments, while the attention mechanism models a soft alignment between each output step and every input step. It is unclear whether this matters for the ASR task.
There are no conclusive studies comparing these architectures at scale. In this work, we train all three models on the same datasets using the same methodology, in order to perform a fair comparison. Models that do not assume conditional independence between predictions given the full input (viz., RNN-Transducers and attention models) are able to learn an implicit language model from the training corpus and optimize WER more directly than other models. We find that they therefore perform quite competitively, even outperforming CTC + LM models without the use of an external language model. Among them, RNN-Transducers have the simplest decoding procedure and fewer hyper-parameters to tune.
In the following sections, we first revisit the three models and describe notable specifics of our implementations. Then, in Section 3, we present our results on the Hub5'00 benchmark and on our own internal dataset. In Section 4, we study how well the models train when using forward-only layers, and when we apply aggressive pooling in the encoder layers, on the WSJ dataset, controlling the number of parameters in each model. Section 6 presents related work, and Section 7 summarizes the key takeaways and discusses the scope of future work.
Illustration of the probability transitions of the three transducers on an utterance of length 5 labelled "CAT". The node at (t, u), with t on the horizontal axis and u on the vertical axis, represents the probability of having output the first u elements of the output sequence by point t in the transcription sequence. A vertical arrow represents predicting multiple characters at one time step (not allowed for CTC). A horizontal arrow represents predicting a repeated character (for CTC) or predicting nothing (for RNN-Transducer). The solid arrows represent hard alignments (for CTC and RNN-Transducer); attention uses soft alignments. Note that in CTC and RNN-Transducer, states can only move toward the top right one step at a time, while in attention, all input frames can potentially be attended to at any decoding step.
2 Neural Speech Transducers
A speech transducer is typically composed of an encoder (also known as the acoustic model), which transforms the acoustic inputs into high-level representations, and a decoder, which produces linguistic outputs (i.e., characters or words) from the encoded representations. The challenge is that the input and output sequences have variable (and different) lengths, and alignments between them are usually unavailable. Neural transducers therefore have to learn both the classification from acoustic features to linguistic predictions and the alignment between them. Transducer models differ in the formulations of the classifier and the aligner.
More formally, given an input sequence $x = (x_1, \ldots, x_T)$ of length $T$ and an output sequence $y = (y_1, \ldots, y_U)$ of length $U$, with each $y_u$ a one-hot vector over the output vocabulary, transducers model the conditional distribution $P(y \mid x)$. The encoder maps the input $x$ into a high-level representation $h = (h_1, \ldots, h_{T'})$, which can be shorter than the input ($T' \leq T$) with time-scale downsampling. The encoder can be built with feed-forward neural networks (DNNs), recurrent neural networks (RNNs), or convolutional neural networks (CNNs). The decoder defines the alignment(s) and the mapping from $h$ to $y$.
CTC [12, 2] computes the conditional probability $P(y \mid x)$ by marginalizing over all possible alignments, and it assumes conditional independence between output predictions at different time steps given the aligned inputs. An extra 'blank' label, which can be interpreted as emitting no label, is introduced to map $y$ and $h$ to the same length, i.e., an alignment (path) is obtained by inserting $(T' - U)$ blanks into $y$. A mapping is defined from paths to $y$, which removes all blanks and repeated letters in the path. The conditional probability can be efficiently calculated using a forward-backward dynamic-programming algorithm, as detailed in [12]. Note that the alignments are both local and monotonic.
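As an illustration, the collapse mapping described above can be sketched in a few lines of Python (a hypothetical `ctc_collapse` helper; label 0 is assumed to be the blank):

```python
def ctc_collapse(path, blank=0):
    # Map a CTC alignment (path) to a label sequence: merge runs of
    # repeated symbols, then drop blanks -- the mapping described above.
    out, prev = [], None
    for s in path:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return out
```

For example, the paths `[1, 1, 0, 2, 0, 2, 2]` and `[0, 1, 2, 2, 0, 2]` both collapse to `[1, 2, 2]`, which is why the likelihood of a transcription sums over many paths.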
Here we use the conventional definition of softmax. The CTC output can be decoded by greedily picking the most likely label at each time step (strictly speaking, this finds the most likely alignment rather than the most likely transcription, but we find that for a fully trained model the distribution is dominated by a single alignment). To make beam search effective, the conditional independence assumption is artificially broken by the inclusion of a language model, and decoding then becomes the task of finding the transcription that maximizes the acoustic likelihood combined with a weighted language-model score. This presents a discrepancy between how these models are trained and how they are tested. To address this, models can be further fine-tuned with a loss function that also incorporates language-model information, such as sMBR, but the principal issue, the absence of dependence between predictions, remains.
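The forward half of the dynamic program mentioned above can be sketched as follows. This is a simplified log-space implementation of the standard CTC recursion over the blank-extended label sequence, not the authors' code; `log_probs[t][k]` is assumed to hold the per-frame log-probability of label `k`:

```python
import math

def ctc_log_likelihood(log_probs, labels, blank=0):
    # Forward pass of the CTC dynamic program in log space.
    # log_probs: T lists, log_probs[t][k] = log P(label k at frame t).
    # labels: target sequence without blanks.
    ext = [blank]                      # blank-extended sequence: b,l1,b,l2,...,b
    for l in labels:
        ext += [l, blank]
    S, NEG = len(ext), float("-inf")
    alpha = [NEG] * S
    alpha[0] = log_probs[0][blank]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]
    for t in range(1, len(log_probs)):
        new = [NEG] * S
        for s in range(S):
            cands = [alpha[s]]                       # stay
            if s > 0:
                cands.append(alpha[s - 1])           # advance one
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(alpha[s - 2])           # skip a blank
            m = max(cands)
            if m > NEG:
                new[s] = m + math.log(sum(math.exp(c - m) for c in cands)) \
                         + log_probs[t][ext[s]]
        alpha = new
    m = max(alpha[-1], alpha[-2])
    if m == NEG:
        return NEG
    return m + math.log(math.exp(alpha[-1] - m) + math.exp(alpha[-2] - m))
```

As a sanity check, with two frames of uniform probabilities over {blank, 1, 2}, the three paths (1,1), (blank,1), (1,blank) each have probability 1/9, so the likelihood of "1" is 1/3.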
The RNN-Transducer [11, 14] also marginalizes over all possible alignments, like CTC, while extending CTC by additionally modeling the dependencies between outputs at different timesteps. More specifically, the prediction at a given time step depends not only on the aligned input but also on the previous predictions.
Here $u(t)$ denotes the output timestep aligned to the input timestep $t$. An extra recurrent network (the prediction network) is used to help determine the next output by predicting decoder logits, and the conditional distribution at time $t$ is computed by normalizing the sum of the encoder logits and the prediction-network logits. The combination could in principle be any parametric function; we follow the formulation of [11]. As in CTC, the marginalized alignments are local and monotonic, and the likelihood of the label sequence can be calculated efficiently using dynamic programming. Decoding uses beam search as in [11], but we do not use length normalization as originally suggested, since we do not find it necessary.
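Assuming the simple additive combination from the original RNN-Transducer formulation, the joint normalization of encoder and prediction-network logits might look like the following sketch (the names `f_t` and `g_u` are our own, standing for the per-frame encoder logits and the prediction-network logits):

```python
import math

def joint_log_probs(f_t, g_u):
    # Additive RNN-Transducer joint (a sketch): sum the encoder logits
    # f_t and prediction-network logits g_u, then log-softmax so the
    # result is a normalized distribution over labels (incl. blank).
    z = [a + b for a, b in zip(f_t, g_u)]
    m = max(z)
    lse = m + math.log(sum(math.exp(v - m) for v in z))
    return [v - lse for v in z]
```

The output is a valid log-distribution at each (t, u) grid point of the lattice in the figure above.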
2.3 Attention Model
The attention model [8, 3, 5] aligns the inputs and outputs using the attention mechanism. Like the RNN-Transducer, the attention model removes the conditional independence assumption over the label sequence that CTC makes. Unlike CTC and the RNN-Transducer, however, it neither assumes a monotonic alignment nor explicitly marginalizes over alignments. Instead, it computes $P(y \mid x)$ by picking a soft alignment between each output step and every input step.
Here $c_u$ is the context for decoding timestep $u$, computed as the sum of the entire encoding $h$ weighted by the attention weights $\alpha_u$, and $s_u$ is the hidden state of the decoder at decoding step $u$. There exist different ways [6, 3] to compute the attention weights; we used a location-aware hybrid attention mechanism in our experiments.
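A generic content-based attention step (not the exact location-aware hybrid variant used here, whose precise form we do not reproduce) can be sketched as:

```python
import numpy as np

def attention_context(s_u, h, score):
    # Score each encoder state against the decoder state s_u, softmax
    # the scores into weights alpha, and return the weighted sum of
    # the encoder states as the context vector c_u.
    e = np.array([score(s_u, h_t) for h_t in h])
    a = np.exp(e - e.max())
    a /= a.sum()
    context = a @ np.stack(h)
    return context, a
```

A location-aware variant would additionally feed the previous step's weights into `score`, biasing the model toward roughly monotonic movement.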
The attention mechanism allows the model to attend anywhere in the input sequence at each step, so the alignments can be non-local and non-monotonic. However, this excessive generality comes with more complicated decoding for the ASR task, since these models can both terminate prematurely and fail to terminate by repeatedly attending over the same encoding steps. The decoding task therefore maximizes the length-normalized log-likelihood augmented with a coverage term (Equation 16), with a hyperparameter controlling the length normalization. The coverage term "cov" encourages the model to attend over all encoder time steps, and stops rewarding repeated attendance over the same time steps. The coverage term thus addresses both too-short and infinitely long decodings.
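One plausible way to combine length normalization with the coverage term during beam search is sketched below; the hyperparameter values `alpha`, `beta`, and the per-step cap `tau` are illustrative assumptions, not the paper's settings:

```python
def attention_beam_score(log_prob, length, coverage, alpha=1.0, beta=0.5, tau=0.5):
    # Hypothetical beam-search score: length-normalized log-likelihood
    # plus a coverage bonus. Capping each encoder step's accumulated
    # attention mass at tau stops rewarding repeated attendance over
    # the same time steps.
    cov = sum(min(c, tau) for c in coverage)
    return log_prob / (length ** alpha) + beta * cov
```

Under this score, a hypothesis that spreads its attention across the encoder outranks one of equal likelihood that dwells on a few steps, which is exactly the failure mode the coverage term targets.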
3 Performance at Scale
In this section, we compare the performance of the models on a public benchmark as well as our own internal dataset.
The promise of end-to-end models for ASR was the simplification of the training and inference pipelines of speech systems. End-to-end CTC models only simplified the training process; inference still involves decoding with massive language models, which often requires teams to build and maintain complicated decoders. Since attention and RNN-Transducer models implicitly learn a language model from the speech training corpus, rescoring or decoding using language models trained solely on the text of the speech corpus does not contribute to improvements in WER (Table 1). When an external LM trained on more data is available, simply rescoring the final beam (typically small, between 32 and 256 candidates) recovers all of the performance difference (Table 3). Decoding and beam search are therefore simplified: they can be expressed as neural network operations and need not support massive language models. This trend is already seen in neural machine translation, where state-of-the-art NMT systems do not typically use an external language model.
3.1 Hub5’00 results
The performance of the models on the Hub5’00 benchmark is presented in Table 1 along with other published results on in-domain data. All of the models in Table 1 use the standard language model that is paired with the dataset, except for the rows marked “NO LM”. Without using any language model, both the attention and RNN-Transducer models outperform the CTC model trained on the same corpus, and are highly competitive with the best results on this dataset. Since the LM is also trained on the same training corpus, rescoring with the LM has little effect on attention and RNN-Transducer models.
We found that beam search in attention worked best when using only length normalization (with no coverage term in Equation 16). However, as the distribution of errors in Table 2 shows, the RNN-Transducer has no obvious problems with premature termination: the number of deletions is very small even though there is no length normalization. Attention and RNN-Transducer both use a beam width of 32.
Table 1: WER on the Hub5'00 benchmark (SWB / CH).

| System | SWB | CH |
| --- | --- | --- |
| BLSTM + LF MMI | 8.5 | 15.3 |
| LACE + LF MMI | 8.3 | 14.8 |
| Dilated convolutions | 7.7 | 14.5 |
| CTC + Gram-CTC | 7.3 | 14.7 |
| BLSTM + Feature fusion | 7.2 | 12.7 |
| Beam Search NO LM | 8.5 | 16.4 |
| Beam Search + LM | 8.1 | 17.5 |
| Beam Search NO LM | 8.6 | 17.8 |
| Beam Search + LM | 8.6 | 17.8 |

(For the LACE + LF MMI row, an unreported result using an RNN-LM trained on in-domain text could be better than the number shown.)
Table 3: WER on the DeepSpeech corpus (dev / test).

| Model | Decoding | Dev | Test |
| --- | --- | --- | --- |
| CTC | Beam search + LM (beam=2000) | 15.9 | 16.44 |
| RNN-Transducer | Beam search (beam=32) | 17.41 | - |
| | + LM rescoring | 15.6 | 16.50 |
| Attention | Beam search (beam=256) | 18.71 | - |
| | + Length-norm weight | 19.5 | - |
| | + Coverage cost | 18.9 | - |
| | + LM rescoring | 16.0 | 16.48 |
Table 4: Example decodes.

| System | Output |
| --- | --- |
| Attention | i want to get to get to get to get to get to get to get to get to do that |
| Ground Truth | play the black eyed peas songs |
| + Greedy | lading to black irpen songs |
| + Beam Search + LM | leading to black european songs |
| + Greedy | play the black eye piece songs |
| + Beam Search | play the black eye piece songs |
| + LM rescore | play the black eyed peas songs |
| + Greedy | play the black eyed pea songs |
| + Beam Search | play the black eyed pea songs |
| + LM rescore | play the black eyed peas songs |
3.2 DeepSpeech corpus
The DeepSpeech corpus contains speech from a diverse set of scenarios, such as far-field, background noise, and accented speech. Additionally, the train and test sets are drawn from different distributions, since we do not have access to large volumes of data from the target distribution. We rely on external language models trained on significantly larger text corpora to close the gap between train and test distributions. This setting therefore provides the best opportunity to study the impact of language models on attention and RNN-Transducer models.
On the development set, the RNN-Transducer model matches the performance of the best CTC model within 1.5 WER without any language model, and completely closes the gap by rescoring the resulting beam of only 32 candidates. Surprisingly, attention models start from a WER similar to that of CTC models after greedy decoding, but the two architectures make very different errors. CTC models have a poorer WER mainly because of misspellings, but the relatively higher WER of attention models can largely be attributed to noisy utterances. In these cases, the attention models act similarly to a language model and arbitrarily output characters while repeatedly attending over the same encoder time steps. While the coverage term in Equation 16 helps address this issue during beam search, the greedy decoding cannot be improved. An example of this situation is shown in Table 4. The monotonic left-to-right decoding of CTC and RNN-Transducers naturally avoids these issues. Further, the coverage term only helps keep the correct answers in the beam; language-model rescoring of the final beam is still required to bring the correct answers back to the top.
3.3 Experimental details
Throughout the paper, all audio data is sampled at 16kHz and normalized to a constant power. Log-linear or log-Mel spectrograms (the specific type of featurization is a hyper-parameter we tune over) are extracted with a hop size of 10ms and a window size of 20ms, and then globally normalized so that each input spectrogram bin has zero mean and unit variance. We do not use speaker information in any of our models. Every epoch, background noise is added to a randomly selected 40% of the utterances.
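The global feature normalization described above might be implemented as follows (a sketch; in practice the corpus statistics would be precomputed once and reused):

```python
import numpy as np

def global_normalize(spectrograms):
    # Normalize each spectrogram bin to zero mean and unit variance,
    # with statistics computed globally over the whole corpus.
    # spectrograms: list of (frames, bins) arrays.
    stacked = np.concatenate(spectrograms, axis=0)   # (total_frames, bins)
    mean = stacked.mean(axis=0)
    std = stacked.std(axis=0) + 1e-8                 # avoid divide-by-zero
    return [(s - mean) / std for s in spectrograms]
```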
All models in Table 1 were trained on the standard Fisher-Swbd dataset comprising the LDC corpora (97S62, 2004S13, 2004T19, 2005S13, 2005T19). We use a portion of the RT02 corpus (2004S11) for hyper-parameter tuning. The language model used for decoding the CTC model, as well as for rescoring the other models, is the standard 4-gram LM available for this benchmark from the Kaldi recipe. The language model used by all models in Table 3 is built from a sample of the Common Crawl dataset.
Model specification. All models in Tables 1 and 3 are tuned independently of each other: we perform a random search over encoder and decoder sizes, amount of pooling, minibatch size, choice of optimizer, and learning and annealing rates. Further, no constraints are placed on any model in terms of number of parameters, wall-clock time, or otherwise.
The training procedure mainly follows [2] and may use a convolutional front-end. (We also find that the encoder layers could be replaced with LSTM layers with tanh activation, weight noise, and no batch normalization; in most cases, only 512 LSTM cells with weight noise can match the performance of large un-regularized GRU cells with batch normalization.) In shorthand, [2x2D-Conv (2), 3x2560 GRU] represents a stack of 2 layers of 2D convolution followed by a stack of 3 bidirectional ReLU GRU layers; "(2)" indicates that the layer downsamples the input by 2 along the time dimension. In this shorthand, the best CTC model is [2x2D-Conv (2), 3x2560 GRU]; the best RNN-Transducer has encoder [2x2D-Conv (2), 4x2048 GRU] and decoder [3x1024 Fwd-GRU]; the best attention model works best without a convolutional front-end, with encoder [4x2560 GRU (4)] and decoder [1x512 Fwd-GRU]. All models therefore have about 120M parameters. All models were trained with a minibatch of 512 on 16 M40 GPUs using synchronous SGD, and typically converge to the final solution within 70k iterations.
4 Impact of encoder architecture
In this section, we use the standard WSJ dataset to understand how the models perform with different encoder choices. Since the encoder layers are far away from the loss functions we are evaluating, one might expect that an encoder that works well with CTC would also perform well with attention and RNN-Transducer. However, different training targets allow for different kinds of encoders. In particular: (1) the amount of downsampling in the encoder is an important factor that impacts both training wall-clock time and the accuracy of the model; (2) encoders with forward-only layers also allow for streaming decoding, so we explore that aspect as well. We believe that these results on the smaller and more uniform dataset should still hold at scale, and we therefore focus on the trends rather than optimizing for WER.
We control all the models in this section to have 4 layers of 256 bidirectional LSTM cells in the encoder, with weight noise. We perform a random search over pooling in the encoder, whether to use a convolutional front-end, data augmentation, weight noise, and optimization hyper-parameters. We report the best numbers within the first 60k iterations of training. (Better results are observed for all models if they are trained for 400k iterations, e.g., a WER of 15.72 for the attention model after beam search on the WSJ dev'93 set, but the conclusions of the comparison remain unchanged.) This search over the hyper-parameter space has allowed us to match previously published results: the attention model in Table 5 has a WER of 17.4 after beam search on the WSJ dev'93 set, which matches the previously published result of 17.9, and the CTC model improves on previously reported results. We therefore believe that this provides a good baseline for exploring the trade-offs in modeling choices.
4.1 Forward-only encoders
Streaming transcription is an important requirement for ASR models. The first step towards deploying these models in this setting is to replace the bidirectional layers with forward-only recurrent layers. Note that while this immediately makes CTC and RNN-Transducer models deployable, attention models still need to process the entire utterance before outputting the first character. Alternatives have been proposed to circumvent this issue [22, 1] by building attention models with monotonic attention and streaming decoders, but none of them completely match the performance of full-attention models. Nevertheless, we believe a comparison with full-attention models is important to determine whether full attention over the entire audio provides additional performance or improves training. In our experiments, we replace every layer of 256 bidirectional LSTM cells in the encoder with a layer of 512 forward-only LSTM cells.
From Table 5, we find that CTC models are significantly more stable, easier to train, and better-performing in the forward-only setting. Also, since the attention models are quite a bit better than the RNN-Transducer models, full attention over all encoder time steps appears to be valuable.
4.2 Downsampling in the encoder
One effective way to control both the memory usage and the training time of these models is to compress along the time dimension in the encoder, so that the recurrent layers are unrolled over fewer time steps. Previous results have shown that CTC models work best at 50 steps per second of audio (a 2x reduction, since spectrograms are typically computed at 100 steps per second of audio), while attention models work best at about 12 steps per second of audio. So, given the same encoder architecture, the final encoder layer of an attention model with 3 layers of pyramidal pooling requires less computation than that of a CTC model. This is important since the attention now only needs to be computed over this small number of encoder time steps.
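Pyramidal downsampling by frame concatenation is one common way to realize this pooling; the sketch below is an assumption about the mechanism (models may instead pool by striding in convolutions):

```python
import numpy as np

def pyramid_pool(h, layers=3):
    # Halve the time dimension at each layer by stacking adjacent
    # frames: T steps of width d become about T / 2**layers steps of
    # width d * 2**layers.
    for _ in range(layers):
        if h.shape[0] % 2:                  # pad odd-length sequences
            h = np.concatenate([h, h[-1:]], axis=0)
        h = h.reshape(h.shape[0] // 2, -1)  # concatenate pairs of frames
    return h
```

With 3 layers, 100 input steps per second reduce to 13 encoder steps, in line with the roughly 12 steps per second cited above.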
Since RNN-Transducers and attention models can output multiple characters for the same encoder timestep, we expected RNN-Transducers to be as robust as attention models as we increase the amount of pooling in the encoder. While Figure 5 shows that they are fairly robust compared to the CTC models, we find that attention models are significantly more robust. In addition, we have successfully trained attention models with up to 5 layers of pooling (a 32x reduction in the encoder), which forces the model to compress one second of audio into only about 3 encoder steps.
5 Alignment Visualization
The three transduction models formulate the alignments between input and output in different ways. CTC and RNN-Transducer models explicitly treat the alignment as a latent variable and marginalize over all possible hard alignments, while the attention mechanism models a soft alignment between each output step and every input step. In addition, RNN-Transducer and attention models can produce multiple characters while reading the same input location, while CTC can produce at most one.
Here, we visualize the alignments learned by the three models to understand the formulation made by each. Figure 6 plots the alignments for one utterance from the WSJ dev set. Since the alignments are computed based on the ground-truth text (instead of predictions), all three models produce reasonable alignments; notably, even the attention alignment is monotonic. Several observations follow:
We can see small jumps along the x-axis in the left subfigure, as CTC inserts blanks into the output labels in order to align them with the inputs.
Multiple attendance (producing several characters) over the same input column can be seen in the RNN-Transducer (middle) and attention (right) models.
The alignments computed by CTC and RNN-Transducer are more concentrated (peaky) compared to those of attention. In addition, the attention model produces diffuse distributions at the beginning of the audio.
6 Related Work
Segmental RNNs provide another alternative way to model the ASR task, modeling $P(y \mid x)$ with a zeroth-order CRF. While global normalization helps address the label-bias issues of CTC, we believe that the bigger issue is still the conditional independence assumption made by both CTC and Segmental RNNs.
The works in [5, 8, 3] directly compare the WERs of attention models with those of CTC and RNN-Transducer models listed in the original papers, without any control over either the acoustic models or the optimization methodology. A prior study performed an initial controlled comparison of several speech transduction models, but only presents results on a small dataset, TIMIT.
There has also been some recent effort [22, 1] to introduce local and monotonic constraints into attention models, especially for online applications. In theory, these efforts bridge the modeling assumptions of attention and RNN-Transducer models. With these constraints, the fitting capability of attention models is limited, but they may be more robust to noisy test data in return; in other words, such attention models can work without extra tricks during beam search decoding, e.g., the coverage penalty.
7 Conclusion and Future Work
We present a thorough comparison of three popular models for the end-to-end ASR task at scale, and find that in the bidirectional setting, all three models perform roughly the same. However, these models differ in the simplicity of their training and decoding pipelines. Notably, end-to-end models trained with the CTC loss simplify the training process but still require decoding with large language models. RNN-Transducers and attention models also simplify the decoding process, and need language models only in a post-processing stage to be equally effective, if not more so. Between these two, RNN-Transducers have the simplest decoding process, with no extra hyper-parameters to tune for decoding, which leads us to believe that RNN-Transducers represent the next generation of end-to-end speech models. In attempting to train RNN-Transducer models under the streaming constraint, and in reducing computation in the encoder layers, we find that CTC and attention models still have strengths that we aim to bring to RNN-Transducers in future work.
We would like to thank Xiangang Li of the Baidu Speech Technology Group for feedback on the work and for helping improve the draft.
-  Roee Aharoni and Yoav Goldberg. Sequence to sequence transduction with hard monotonic attention. arXiv preprint arXiv:1611.01487, 2016.
-  Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in english and mandarin. arXiv preprint arXiv:1512.02595, 2015.
-  Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end attention-based large vocabulary speech recognition. arXiv preprint arXiv:1508.04395, 2015.
-  Eric Battenberg, Rewon Child, Adam Coates, Christopher Fougner, Yashesh Gaur, Jiaji Huang, Heewoo Jun, Ajay Kannan, Markus Kliegl, Atul Kumar, et al. Reducing bias in production speech models. arXiv preprint arXiv:1705.04400, 2017.
-  William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. Listen, attend, and spell. arXiv preprint arXiv:1508.01211, 2015.
-  William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.
-  Chung-Cheng Chiu, Dieterich Lawson, Yuping Luo, George Tucker, Kevin Swersky, Ilya Sutskever, and Navdeep Jaitly. An online sequence-to-sequence model for noisy speech recognition. arXiv preprint arXiv:1706.06428, 2017.
-  Jan Chorowski, Dzmitry Bahdanau, Dmitry Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. arXiv preprint arXiv:1506.07503, 2015.
-  Jan Chorowski and Navdeep Jaitly. Towards better decoding and language model integration in sequence to sequence models. arXiv preprint arXiv:1612.02695, 2016.
-  Ronan Collobert, Christian Puhrsch, and Gabriel Synnaeve. Wav2letter: an end-to-end convnet-based speech recognition system. arXiv preprint arXiv:1609.03193, 2016.
-  Alex Graves. Sequence transduction with recurrent neural networks. arXiv preprint arXiv:1211.3711, 2012.
-  Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, pages 369–376. ACM, 2006.
-  Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1764–1772, 2014.
-  Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, 2013.
-  Awni Y. Hannun, Andrew L. Maas, Daniel Jurafsky, and Andrew Y. Ng. First-pass large vocabulary continuous speech recognition using bi-directional recurrent DNNs. arXiv preprint arXiv:1408.2873, 2014.
-  G.E. Hinton, L. Deng, D. Yu, G.E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine, 29(November):82–97, 2012.
-  Hairong Liu, Zhenyao Zhu, Xiangang Li, and Sanjeev Satheesh. Gram-ctc: Automatic unit selection and target decomposition for sequence labelling. CoRR, abs/1703.00096, 2017.
-  Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith, and Steve Renals. Segmental recurrent neural networks for end-to-end speech recognition. In INTERSPEECH, 2016.
-  Yajie Miao, Mohammad Gowayyed, and Florian Metze. Eesen: End-to-end speech recognition using deep rnn models and wfst-based decoding. In Automatic Speech Recognition and Understanding (ASRU), 2015 IEEE Workshop on, pages 167–174. IEEE, 2015.
-  D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, K. Veselý, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, and G. Stemmer. The Kaldi speech recognition toolkit. In ASRU, 2011.
-  Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. Purely sequence-trained neural networks for asr based on lattice-free mmi. In INTERSPEECH, pages 2751–2755, 2016.
-  Colin Raffel, Thang Luong, Peter J Liu, Ron J Weiss, and Douglas Eck. Online and linear-time attention by enforcing monotonic alignments. arXiv preprint arXiv:1704.00784, 2017.
-  George Saon, Gakuto Kurata, Tom Sercu, Kartik Audhkhasi, Samuel Thomas, Dimitrios Dimitriadis, Xiaodong Cui, Bhuvana Ramabhadran, Michael Picheny, Lynn-Li Lim, et al. English conversational telephone speech recognition by humans and machines. arXiv preprint arXiv:1703.02136, 2017.
-  Andrew W. Senior, Hasim Sak, Felix de Chaumont Quitry, Tara N. Sainath, and Kanishka Rao. Acoustic modelling with cd-ctc-smbr lstm rnns. In ASRU, 2015.
-  Tom Sercu and Vaibhava Goel. Dense prediction on sequences with time-dilated convolutions for speech recognition. arXiv preprint arXiv:1611.09288, 2016.
-  Jason R Smith, Herve Saint-Amand, Magdalena Plamada, Philipp Koehn, Chris Callison-Burch, and Adam Lopez. Dirt cheap web-scale parallel text from the common crawl. In ACL (1), pages 1374–1383, 2013.
-  Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
-  Wayne Xiong, Jasha Droppo, Xuedong Huang, Frank Seide, Mike Seltzer, Andreas Stolcke, Dong Yu, and Geoffrey Zweig. Achieving human parity in conversational speech recognition. arXiv preprint arXiv:1610.05256, 2016.
-  Geoffrey Zweig, Chengzhu Yu, Jasha Droppo, and Andreas Stolcke. Advances in all-neural speech recognition. arXiv preprint arXiv:1609.05935, 2016.