1 Introduction
End-to-end models for tasks such as automatic speech recognition require the use of either structured loss functions like Connectionist Temporal Classification
(Graves et al., 2006) or unstructured models like sequence-to-sequence (Sutskever et al., 2014), which leverage an attention mechanism (Bahdanau et al., 2014) to learn an implicit alignment. Both types of models suffer from an exposure bias and a label bias problem. Exposure bias results from the mismatch between how these models are trained and how they are used at inference (Ranzato et al., 2016; Wiseman and Rush, 2016; Baskar et al., 2018). During training, the model is never exposed to its own mistakes, since it uses the ground-truth target as guidance. At inference the target is unavailable and a beam search is typically used, often with the constraints of either a lexicon, a transition model, or both. The model must rely on past predictions to inform future decisions and may not be robust to even minor mistakes.
Label bias occurs when locally normalized transition scores are used (LeCun et al., 1998; Lafferty et al., 2001; Bottou and LeCun, 2005). Local normalization causes outgoing transitions at a given time step to compete with one another. A symptom of this is that it can be difficult for the model to revise its past predictions based on future observations. As a consequence, globally normalized models are strictly more powerful than locally normalized models (Andor et al., 2016).
In this work we build a fully differentiable beam search decoder (DBD) which can be used to efficiently approximate a powerful discriminative model without either exposure bias or label bias. We alleviate exposure bias by performing a beam search which includes the constraints from a lexicon and transition models. In fact, DBD can also jointly train multiple models operating at different granularities (e.g. token- and word-level language models). The DBD avoids the label bias problem since it uses unnormalized scores and combines these scores into a global (sequence-level) normalization term.
Our differentiable decoder can handle unaligned input and output sequences where multiple alignments exist for a given output. The DBD model also seamlessly and efficiently trains token-level, word-level or other-granularity transition models, on their own or in conjunction. Connectionist Temporal Classification (CTC) (Graves et al., 2006) can handle unaligned sequences and is commonly used in automatic speech recognition (ASR) (Amodei et al., 2016) and other sequence labeling tasks (Liwicki et al., 2007; Huang et al., 2016). The Auto Segmentation criterion (ASG) (Collobert et al., 2016) can also deal with unaligned sequences. However, neither of these criteria allows for joint training of arbitrary transition models.
Because DBD learns how to aggregate scores from various models at training time, it avoids the need for a grid search over decoding parameters (e.g. language model weights and word insertion terms) on a held-out validation set at test time. Furthermore, all of the scores used by the differentiable decoder are unnormalized, and we thus discard the need for costly normalization terms over large vocabulary sizes. In fact, when using DBD, training a language model with a two-million-word vocabulary instead of a two-thousand-word vocabulary would incur little additional cost.
We apply DBD to the task of automatic speech recognition and show competitive performance on the Wall Street Journal (WSJ) corpus (Paul and Baker, 1992). Compared to other baselines which only use the acoustic data and transcriptions, our model achieves word error rates which are comparable to the state of the art. We also show that DBD enables much smaller beam sizes and smaller and simpler models while achieving lower error rates. This is crucial, for example, in deploying models with tight latency and throughput constraints.
In the following section we give a description of the exact discriminative model we wish to learn, and in Sections 3 and 4 show how a differentiable beam search can be used to efficiently approximate this model. In Section 4 we also explain the target sequence-tracking technique which is critical to the success of DBD. We explain how DBD can be applied to the task of ASR in Section 5, along with a description of the acoustic and language models (LMs) we consider. Section 6 describes our experimental setup and results on the WSJ speech recognition task. In Section 7 we put DBD in context with prior work, and conclude in Section 8.
2 Model
In the following, we consider an input sequence $X = (x_1, \dots, x_T)$ and its corresponding target sequence of words $\mathbf{y} = (y_1, \dots, y_N)$. We also denote a token-level alignment over the $T$ acoustic frames as $\pi = (\pi_1, \dots, \pi_T)$. An alignment leading to the target sequence of words $\mathbf{y}$ is denoted $\pi \in \mathcal{A}(\mathbf{y})$. The conditional likelihood of $\mathbf{y}$ given the input $X$ is then obtained by marginalizing over all possible alignments leading to $\mathbf{y}$:

$$P(\mathbf{y}|X) = \sum_{\pi \in \mathcal{A}(\mathbf{y})} P(\pi|X). \qquad (1)$$
In the following, we consider a scoring function $f_t(i|X)$ which outputs a score for each frame $t$ and each label $i$ in a token set. We also consider a token transition model $g(i, j)$ (we stick to a bigram for the sake of simplicity) and a word transition model $h(\cdot)$. Given an input sequence $X$, an alignment $\pi$ is assigned an unnormalized score $s(\pi|X)$, obtained by summing up frame scores, token-level transition scores along $\pi$, and the word-level transition model score:

$$s(\pi|X) = \sum_{t=1}^{T} \left[ f_t(\pi_t|X) + g(\pi_{t-1}, \pi_t) \right] + h(\omega(\pi)), \qquad (2)$$

where $\omega(\pi)$ denotes the sequence of words corresponding to the alignment $\pi$.
It is important to note that the frame scores and transition scores are all unnormalized. Hence, we do not require any transition model weighting, as the model will learn the appropriate scaling. Also, $P(\mathbf{y}|X)$ in Equation (1) is obtained by performing a sequence-level normalization, applying a softmax over all alignment scores for all possible valid sequences of words $\mathbf{y}'$ (which necessarily include $\mathbf{y}$):

$$P(\mathbf{y}|X) = \frac{\sum_{\pi \in \mathcal{A}(\mathbf{y})} e^{s(\pi|X)}}{\sum_{\mathbf{y}'} \sum_{\pi' \in \mathcal{A}(\mathbf{y}')} e^{s(\pi'|X)}}. \qquad (3)$$
Combining Equation (1) and Equation (3), and introducing the $\mathrm{logadd}$ operator $\operatorname{logadd}_i z_i = \log \sum_i e^{z_i}$ for convenience, our model can be summarized as:

$$\log P(\mathbf{y}|X) = \operatorname*{logadd}_{\pi \in \mathcal{A}(\mathbf{y})} s(\pi|X) \;-\; \operatorname*{logadd}_{\mathbf{y}'} \operatorname*{logadd}_{\pi' \in \mathcal{A}(\mathbf{y}')} s(\pi'|X). \qquad (4)$$
Our goal is to jointly optimize the scoring function $f(\cdot)$ and the transition models $g(\cdot)$ and $h(\cdot)$ (token-level and word-level) by maximizing this conditional likelihood over all labeled pairs $(X, \mathbf{y})$ available at training time. In Equation (4), it is unfortunately intractable to exactly compute the $\mathrm{logadd}$ over all possible sequences of valid words. In the next section, we will relate this likelihood to what is computed during decoding at inference, and then show how it can be approximated efficiently. In Section 4, we will show how it can be efficiently optimized.
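Since the rest of the derivation leans on the $\mathrm{logadd}$ operator, a minimal sketch may help. The `log_likelihood` helper below is purely illustrative: it assumes the alignment scores have already been enumerated, which is exactly what is intractable in general and what the decoder approximates.

```python
import math

def logadd(scores):
    """Numerically stable log-sum-exp: logadd_i z_i = log(sum_i e^{z_i})."""
    m = max(scores)
    return m + math.log(sum(math.exp(z - m) for z in scores))

def log_likelihood(target_alignment_scores, all_alignment_scores):
    """Equation (4) on a toy enumeration: logadd over alignments of the
    target, minus logadd over alignments of all valid word sequences."""
    return logadd(target_alignment_scores) - logadd(all_alignment_scores)
```

With two equally scored alignments overall, one of which matches the target, the log-likelihood is simply $-\log 2$, as expected from a uniform softmax.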
3 Decoding
At inference, given an input sequence $X$, one needs to find the best corresponding word sequence $\mathbf{y}^\star$. A popular decoding approach is to define the problem formally as finding $\operatorname{argmax}_{\mathbf{y}} \max_{\pi \in \mathcal{A}(\mathbf{y})} s(\pi|X)$, implemented as a Viterbi search. However, this approach takes into account only the best alignment leading to $\mathbf{y}$. Keeping in mind the normalization in Equation (4), and following in the footsteps of Bottou (1991), we are interested instead in finding the $\mathbf{y}^\star$ which maximizes the Forward score:

$$\mathbf{y}^\star = \operatorname*{argmax}_{\mathbf{y}} \log P(\mathbf{y}|X) = \operatorname*{argmax}_{\mathbf{y}} \operatorname*{logadd}_{\pi \in \mathcal{A}(\mathbf{y})} s(\pi|X). \qquad (5)$$
The first derivation in Equation (5) is obtained by plugging in Equation (4) and noticing that the normalization term is constant with respect to $\mathbf{y}$. As the search over all possible sequences of words is intractable, one performs a beam search, which results in a final set of hypotheses $\mathcal{B}$. For each hypothesis $\mathbf{y}$ in the beam, note that only the most promising alignments leading to this hypothesis will also be in the beam ($\mathcal{B} \cap \mathcal{A}(\mathbf{y})$); in contrast to pure Viterbi decoding, these alignments are aggregated through a $\mathrm{logadd}$ operation instead of a $\max$ operation.
Our beam search decoder uses a word lexicon (implemented as a trie converting sequences of tokens into words) to constrain the search to valid sequences of words only. The decoder tracks the hypotheses with the highest scores by bookkeeping tuples of (lexicon state, transition model state, score) as it iterates through time. At each time step, hypotheses with the same transition model state and lexicon state are merged into the top-scoring hypothesis with that state. The score of the resulting hypothesis is the $\mathrm{logadd}$ of the scores of the combined hypotheses.
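The bookkeeping step described above can be sketched as follows. The tuple layout and function names are our own illustration of the idea, not the paper's implementation:

```python
import math
from collections import defaultdict

def logadd(scores):
    """Numerically stable log-sum-exp."""
    m = max(scores)
    return m + math.log(sum(math.exp(z - m) for z in scores))

def merge_and_prune(hypotheses, beam_size):
    """One decoder bookkeeping step. Hypotheses are (lexicon_state,
    lm_state, score) tuples; hypotheses sharing both states are merged
    with logadd, then only the `beam_size` best scores are kept."""
    merged = defaultdict(list)
    for lex_state, lm_state, score in hypotheses:
        merged[(lex_state, lm_state)].append(score)
    beam = [(lex, lm, logadd(s)) for (lex, lm), s in merged.items()]
    beam.sort(key=lambda h: h[2], reverse=True)
    return beam[:beam_size]
```

Merging with $\mathrm{logadd}$ rather than $\max$ is what later allows the same beam to serve both for inference and for approximating the normalization term.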
3.1 Decoding to compute the likelihood normalization
The normalization term, computed over all possible valid sequences of words in the conditional likelihood of Equation (4), can be efficiently approximated by the decoder, subject to a minor modification:

$$\operatorname*{logadd}_{\mathbf{y}'} \operatorname*{logadd}_{\pi' \in \mathcal{A}(\mathbf{y}')} s(\pi'|X) \;\approx\; \operatorname*{logadd}_{\pi \in \mathcal{B}} s(\pi|X), \qquad (6)$$

where $\mathcal{B}$ is the set of alignment hypotheses retained by the decoder beam. Compared to Equation (5), the only change in Equation (6) is the final aggregation of the hypotheses in the beam: at inference, one performs a $\max$ operation, while to compute the likelihood normalization one performs a $\mathrm{logadd}$.
4 Differentiable Decoding
As mentioned in Section 2, we aim to jointly train the scoring function $f(\cdot)$ and the transition models (token-level and word-level) by maximizing the conditional likelihood in Equation (4) over all training pairs $(X, \mathbf{y})$. Leveraging the decoder to approximate the normalization term (with Equation (6)), and denoting $\mathcal{A}(\mathbf{y})$ and $\mathcal{B}$ for all alignments leading to the target and all alignments in the beam respectively, this corresponds to maximizing:

$$\log P(\mathbf{y}|X) \;\approx\; \operatorname*{logadd}_{\pi \in \mathcal{A}(\mathbf{y})} s(\pi|X) \;-\; \operatorname*{logadd}_{\pi \in \mathcal{B}} s(\pi|X). \qquad (7)$$
This approximated conditional likelihood can be computed and differentiated in a tractable manner. The $\mathrm{logadd}$ over $\mathcal{A}(\mathbf{y})$ can be evaluated exactly via dynamic programming (the Forward algorithm). The normalization term is computed efficiently with the decoder. Interestingly, both of these procedures (the Forward algorithm and decoding) only invoke additive, multiplicative and $\mathrm{logadd}$ operations in an iterative manner. In that respect, everything is fully differentiable, and one can compute the first derivatives with respect to the scoring function $f(\cdot)$ and the transition models $g(\cdot)$ and $h(\cdot)$ by backpropagating through the Forward and decoder iterative processes.
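As an illustration of the numerator, here is a minimal Forward recursion over an ASG-style alignment graph (each target token may repeat over consecutive frames; no blank token). The graph layout and names are simplifying assumptions for the sketch, not the authors' code, and a real implementation would backpropagate through these same operations:

```python
import math

def logadd(scores):
    """Numerically stable log-sum-exp; returns -inf on all--inf input."""
    m = max(scores)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(z - m) for z in scores))

def forward_score(frame_scores, transitions, target):
    """logadd over all alignments of `target` onto T frames.
    frame_scores[t][i]: unnormalized score of token i at frame t.
    transitions[(i, j)]: bigram transition score from token i to j."""
    T, N = len(frame_scores), len(target)
    alpha = [float("-inf")] * N
    alpha[0] = frame_scores[0][target[0]]   # must start on the first token
    for t in range(1, T):
        new = [float("-inf")] * N
        for n in range(N):
            paths = [alpha[n] + transitions[(target[n], target[n])]]  # stay
            if n > 0:
                paths.append(alpha[n - 1] + transitions[(target[n - 1], target[n])])
            new[n] = logadd(paths) + frame_scores[t][target[n]]
        alpha = new
    return alpha[N - 1]                     # must end on the last token
```

On a toy two-frame, two-token problem there is a single valid alignment, so the Forward score reduces to that alignment's score.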
4.1 On the normalization approximation
In practice, the normalization approximation shown in Equation (7) is unfortunately inadequate. Indeed, the beam may or may not contain some (or all) of the character alignments found in $\mathcal{A}(\mathbf{y})$. We observe that models with improper normalization fail to converge, as everything (scoring function and transition models) outputs unnormalized scores. We correct the normalization by considering instead the $\mathrm{logadd}$ over $\mathcal{B} \cup \mathcal{A}(\mathbf{y})$. This quantity can be efficiently computed using the following observation:

$$\operatorname*{logadd}_{\pi \in \mathcal{B} \cup \mathcal{A}(\mathbf{y})} s(\pi|X) = \log\!\left( e^{\operatorname{logadd}_{\pi \in \mathcal{B}} s(\pi|X)} + e^{\operatorname{logadd}_{\pi \in \mathcal{A}(\mathbf{y})} s(\pi|X)} - e^{\operatorname{logadd}_{\pi \in \mathcal{B} \cap \mathcal{A}(\mathbf{y})} s(\pi|X)} \right). \qquad (8)$$

The $\mathrm{logadd}$ terms over $\mathcal{B}$ and $\mathcal{A}(\mathbf{y})$ are given by the decoder and the Forward algorithm, respectively. The term over $\mathcal{B} \cap \mathcal{A}(\mathbf{y})$ can also be computed by the decoder, by tracking alignments in the beam which correspond to the ground truth $\mathbf{y}$. While this adds extra complexity to the decoder, it is an essential feature for successful training.
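The union term of Equation (8) can be evaluated stably in log space; a minimal sketch, with `-inf` standing for an empty intersection:

```python
import math

def logadd_union(logadd_beam, logadd_target, logadd_overlap):
    """logadd over B ∪ A(y), given the logadd over B, over A(y), and
    over their intersection B ∩ A(y) (pass -inf when the beam contains
    no target alignment). Shifting by the max keeps the exps stable."""
    m = max(logadd_beam, logadd_target)
    total = (math.exp(logadd_beam - m) + math.exp(logadd_target - m)
             - math.exp(logadd_overlap - m))
    return m + math.log(total)
```

For instance, with two beam-only paths and three target paths (all of score zero, no overlap), the union term is $\log 5$; if one of those paths is shared, it is $\log 4$.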
4.2 On the implementation
Our experience shows that implementing an efficient differentiable version of the decoder is tricky. First, it is easy to miss a term in the gradient, given the complexity of the decoding procedure. It is also difficult to check the accuracy of the gradients by finite differences (and thus hard to find mistakes), given the number of operations involved in a typical decoding pass. To ensure the correctness of our derivation, we first implemented a custom C++ autograd (operating on scalars, as there are no vector operations in the decoder). We then designed a custom version of the differentiable decoder (about 10× faster than the autograd version) which limits memory allocation, and checked the correctness of its gradients against the autograd version.
5 Application to Speech Recognition
In a speech recognition framework, the input sequence $X$ is an acoustic utterance, and the target sequence $\mathbf{y}$ is the corresponding word transcription. Working at the word level is challenging, as corpora are usually not large enough to model rare words properly. Also, some words in the validation or test sets may not be present at training time. Instead of modeling words, one considers sub-word units – like phonemes, context-dependent phonemes, or characters. In this work, we use characters for simplicity. Given an acoustic sequence $X$, the character-level alignments corresponding to the word transcription $\mathbf{y}$ are $\pi \in \mathcal{A}(\mathbf{y})$. The correct character-level alignment is unknown. The scoring function $f(\cdot)$ is an acoustic model predicting character scores at each frame of the utterance $X$. The transition model $g(\cdot)$ learns a character-level language model, and the word transition model $h(\cdot)$ is a word language model.
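For intuition, collapsing a character-level alignment back into its word sequence might look as follows. This is a simplified sketch of the ASG-style convention (repeated characters, no blank token); it ignores how ASG handles doubled letters, which require a dedicated repeat token:

```python
def words_from_alignment(alignment):
    """Collapse an ASG-style character alignment (characters repeated
    across frames, no blank token) into its word sequence, splitting
    on the space token."""
    collapsed = [c for i, c in enumerate(alignment)
                 if i == 0 or c != alignment[i - 1]]
    text = "".join(collapsed)
    return [w for w in text.split(" ") if w]
```

This mapping is what the score of Equation (2) denotes by $\omega(\pi)$: many alignments collapse to the same word sequence, which is why the likelihood marginalizes over them.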
Both the acoustic and language models can be customized in our approach. We use a simple ConvNet architecture (leading to reasonable end-to-end word error rate performance) for the acoustic model, and experimented with a few different language models. We now describe these models in more detail.
5.1 Acoustic Model
We consider a 1D ConvNet for the acoustic model (AM), with Gated Linear Units (GLUs) (Dauphin et al., 2017) for the transfer function and dropout as regularization. Given an input $X$, the $l$-th layer of a Gated ConvNet computes

$$h_l(X) = (X * W_l + b_l) \otimes \sigma(X * V_l + c_l),$$

where $*$ is the convolution operator, $W_l$, $V_l$ and $b_l$, $c_l$ are trainable parameters of two different convolutions, $\sigma$ is the sigmoid function, and $\otimes$ is the element-wise product between matrices. GLUs can be viewed as a gating mechanism to train deeper networks. Gated ConvNets have been shown to perform well on a number of tasks, including speech recognition (Liptchinsky et al., 2017).

As the differentiable decoder requires heavy compute, we bootstrapped the training of the acoustic model with ASG. The ASG criterion is similar to Equation (3), but the normalization term is taken over all sequences of tokens $\pi'$:

$$\log P_{\mathrm{ASG}}(\mathbf{y}|X) = \operatorname*{logadd}_{\pi \in \mathcal{A}(\mathbf{y})} \tilde{s}(\pi|X) \;-\; \operatorname*{logadd}_{\pi'} \tilde{s}(\pi'|X), \qquad (9)$$

where the alignment score $\tilde{s}(\pi|X)$ is given by

$$\tilde{s}(\pi|X) = \sum_{t=1}^{T} \left[ f_t(\pi_t|X) + g(\pi_{t-1}, \pi_t) \right], \qquad (10)$$

which does not include a word language model.
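A per-frame sketch of one GLU convolution layer, written as a naive NumPy loop for clarity (purely illustrative; real implementations use optimized convolution kernels):

```python
import numpy as np

def glu_conv1d(x, w, v, b, c):
    """One Gated Linear Unit layer: h = (x*W + b) ⊗ sigmoid(x*V + c),
    where * is a 1D convolution over time. x: (T, d_in); w, v:
    (k, d_in, d_out) kernels; b, c: (d_out,) biases. Valid convolution,
    so the output has T - k + 1 frames."""
    k = w.shape[0]
    T = x.shape[0]
    out = []
    for t in range(T - k + 1):
        window = x[t:t + k]                        # (k, d_in)
        a = np.einsum("ki,kio->o", window, w) + b  # linear path
        g = np.einsum("ki,kio->o", window, v) + c  # gate path
        out.append(a / (1.0 + np.exp(-g)))         # a ⊗ sigmoid(g)
    return np.stack(out)
```

Note how the gate path modulates the linear path multiplicatively: with a zero gate pre-activation, exactly half of each linear activation passes through.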
5.2 Language Models
The character language model $g(\cdot)$, as shown in Equation (2), was chosen to be a simple trainable scalar score $g(i, j)$ for each pair of tokens $(i, j)$. We experimented with several word language models:

A zero language model, $h(\mathbf{y}) = 0$. This special case is a way to evaluate how knowing the lexicon can help acoustic model training. Indeed, even when there is no language model information, the normalization shown in Equation (4) still takes into account the available lexicon: only sequences of letters leading to a valid sequence of words are considered (compared to any sequence of letters, as in ASG or LF-MMI).

A pretrained language model, possibly trained on data not available for acoustic model training. In this case we considered

$$h(\mathbf{y}) = \alpha \log P_{lm}(\mathbf{y}) + \beta |\mathbf{y}|, \qquad (11)$$

where $P_{lm}$ is the pretrained language model. The language model weight $\alpha$ and word insertion score $\beta$ are parameters trained jointly with the acoustic model.

A bilinear language model. Denoting the sequence of words $\mathbf{y} = (y_1, \dots, y_N)$, we consider the unnormalized language model score:

$$h(\mathbf{y}) = \sum_{n=1}^{N} u_{y_n}^\top \left( \sum_{k=1}^{K-1} R_k\, u_{y_{n-k}} \right), \qquad (12)$$

where $K$ is the order of the language model. The word embeddings $u_w \in \mathbb{R}^d$ (with $d$ to be chosen) and the projection matrices $R_k \in \mathbb{R}^{d \times d}$ are trained jointly with the acoustic model. It is worth mentioning that the absence of normalization makes this particular language model efficient.
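A sketch of this bilinear score under the notation above; the embedding and projection shapes, and the handling of positions with incomplete context, are our assumptions for illustration:

```python
import numpy as np

def bilinear_lm_score(word_ids, embeddings, projections):
    """Unnormalized bilinear LM score: each word's embedding is compared,
    via one projection matrix per offset, to the K-1 preceding words.
    embeddings: (V, d) word embedding table; projections: list of K-1
    (d, d) matrices R_k. Out-of-range context positions are skipped."""
    score = 0.0
    for n, w in enumerate(word_ids):
        context = np.zeros(embeddings.shape[1])
        for k, R in enumerate(projections, start=1):
            if n - k >= 0:
                context += R @ embeddings[word_ids[n - k]]
        score += embeddings[w] @ context
    return float(score)
```

Because the score is unnormalized, evaluating it costs a handful of matrix-vector products per word, independent of the vocabulary size, which is the efficiency argument made above.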
6 Experiments
Figure 1: Performance of the 10M-parameter ConvNet with respect to training epochs over WSJ. ASG baseline validation error is shown as CER (a) and WER (b); each ASG WER was obtained by running a full grid search (beam size 8000). DBD training (c) and validation (d) WER: three DBD runs were forked from the ASG baseline at epochs 150, 250 and 500, and trained with beam size 500.

We performed experiments with WSJ (about 80 hours of transcribed audio data). We consider the standard subsets si284, nov93dev and nov92
for training, validation and test, respectively. We use log-mel filterbanks as features fed to the acoustic model, with 40 filters of size 25ms, strided by 10ms. We consider an end-to-end setup, where the token set (see Section 2) includes the English letters (a–z), the apostrophe and the period character, as well as a space character, leading to 29 different tokens. No data augmentation or speaker adaptation was performed. As WSJ ships with both acoustic training data and language-model-specific training data, we consider two different training setups:
Language models are pretrained (see Equation (11)) with the full available language model data. This allows us to demonstrate that our approach can automatically tune the language model weight and leverage the language model information during the training of the acoustic model.

Both acoustic and language models are trained with acoustic (and corresponding transcription) data only. This allows us to compare with other endtoend work where only the acoustic data was used.
Pretrained language models are n-gram models trained with KenLM (Heafield, 2011). The word dictionary contains words from both the acoustic and language model data. We did not perform any thresholding, leading to about 160K distinct words.

All the models are trained with stochastic gradient descent (SGD), enhanced with gradient clipping (Pascanu et al., 2013) and weight normalization (Salimans and Kingma, 2016). In our experience, these two improvements over vanilla SGD allow higher learning rates, and lead to faster and more stable convergence. Without weight normalization we found GLU ConvNets very challenging to train. We use batch training (16 utterances at once), sorting inputs by length prior to batching for efficiency. Both the neural network acoustic model and the ASG criterion run on a single GPU. The DBD criterion is CPU-only. With ASG, a single training epoch over WSJ takes just a few minutes, while it takes about an hour with DBD.

6.1 Leveraging Language-Model Data
In speech recognition, it is typical to train the acoustic model and the language model separately. The language model can take advantage of large text-only corpora. At inference, both models are combined through the decoding procedure (maximizing Equation (5)). Hyperparameters combining the language model with the acoustic model (as in Equation (11)) are tuned through a validation procedure (e.g. grid search).
We first performed an extensive architecture search, training models with ASG and selecting a baseline model as the one leading to the best validation character error rate (CER). Our best ConvNet acoustic model has 10M parameters and an overall receptive field of 1350ms. A 4-gram language model was trained over the text data provided with WSJ. A decoding grid search was then performed at several points during the acoustic model training (Figure 1b). While CER and WER are correlated, the best WER did not correspond to the best CER (see Figures 1a and 1b).
Model | nov93dev | nov92
ASG 10M AM (beam size 8000) | 8.5 | 5.6
ASG 10M AM (beam size 500) | 8.9 | 5.7
ASG 7.5M AM (beam size 8000) | 8.8 | 6.0
ASG 7.5M AM (beam size 500) | 9.4 | 6.1
DBD 10M AM (beam size 500) | 8.7 | 5.9
DBD 7.5M AM (beam size 500) | 7.7 | 5.3
DBD 7.5M AM (beam size 1000) | 7.7 | 5.1
Attention RNN+CTC (3-gram) (Bahdanau et al., 2016a) | 9.3 | –
CNN+ASG (4-gram) (Zeghidour et al., 2018) | 9.5 | 5.6
CNN+ASG (wav+convLM) (Zeghidour et al., 2018) | 6.8 | 3.5
RNN+E2E-LF-MMI (data augm., +RNN-LM) (Hadian et al., 2018) | 4.1 | –
BLSTM+PAPB+CE (RNN-LM) (Baskar et al., 2018) | 3.8 | –
As DBD training is time-consuming compared to ASG training, we bootstrapped several DBD models from three different checkpoints of our ASG model (at epochs 125, 250 and 500). With DBD, the acoustic model is jointly fine-tuned with the weights of the language model shown in Equation (11). Figures 1c and 1d show the training and validation WER with respect to the number of epochs over WSJ. DBD converges quickly from the pretrained ASG model, while many epochs (and a grid search for the language model hyperparameters) are required to match the same WER with ASG. When starting from later ASG epochs (250 and 500), DBD badly overfits to the training set.
To mitigate overfitting, we trained a variant of our 10M model with the receptive field reduced to 870ms (leading to 7.5M parameters). Table 1 summarizes our results. While ASG is unable to match the WER performance of the 1450ms-receptive-field model, training with DBD leads to better performance, demonstrating the advantage of jointly training the acoustic model with the language model. Not only does DBD allow for more compact acoustic models, but DBD-trained models also require a much smaller beam at decoding, which brings a clear speed advantage at inference.
Figure 3 shows the effect of the beam size during DBD training. Too small a beam leads to model divergence. Training epoch times for beam sizes of 500, 1000 and 2000 are 35, 90 and 180 minutes, respectively. In most experiments we use a beam size of 500, as larger beam sizes led to marginal WER improvements. Still, in contrast to pipelines suffering from exposure bias (e.g. Koehn and Knowles, 2017), larger beams never hurt.
6.2 Experimenting with Acoustic-Only Data
Recent work on WSJ has shown that end-to-end approaches are good at modeling acoustic data. Some of these works also demonstrated that with architectures powerful enough to capture long-range dependencies, end-to-end approaches can also implicitly model language, and push the WER even further down. DBD allows us to explicitly design acoustic and language models while training them jointly. We show in this section that with simple acoustic and language models, we can achieve WERs on par with existing approaches trained on acoustic-only data.
Model | nov93dev | nov92
ASG (zero LM decoding) | 18.3 | 13.2
ASG (2-gram LM decoding) | 14.8 | 11.0
ASG (4-gram LM decoding) | 14.7 | 11.3
DBD zero LM | 16.9 | 11.6
DBD 2-gram LM | 14.6 | 10.4
DBD 2-gram bilinear LM | 14.2 | 10.0
DBD 4-gram LM | 13.9 | 9.9
DBD 4-gram bilinear LM | 14.0 | 9.8
RNN+CTC (Graves and Jaitly, 2014) | 30.1 | –
Attention RNN+CTC (Bahdanau et al., 2016a) | 18.6 | –
Attention RNN+CTC+TLE (Bahdanau et al., 2016b) | 17.6 | –
Attn. RNN+seq2seq+CNN (speaker adapt.) (Chan et al., 2017) | 9.6 | –
BLSTM+PAPB+CE (Baskar et al., 2018) | 10.8 | –
In Table 2 we report standard baselines for this setup, as well as our own ASG baseline model, decoded with an n-gram language model trained only on the acoustic data. We compare with DBD-trained models using the three different language models introduced in Section 5.2: (i) a zero language model, which allows us to leverage the word lexicon while training; (ii) n-gram language models, pretrained on the acoustic data (where the weighting is trained jointly with the acoustic model); and (iii) bilinear language models (where all the parameters are trained jointly with the acoustic model). The results show that merely knowing the lexicon when training the acoustic model already greatly improves the WER over the baseline ASG model, where the lexicon is known only at test time. Jointly training a word language model with the acoustic model further reduces the WER.
7 Related Work
Our differentiable decoder belongs to the class of sequence-level training criteria, which includes Connectionist Temporal Classification (CTC) (Graves et al., 2006; Graves and Jaitly, 2014) and the Auto Segmentation (ASG) criterion (Collobert et al., 2016), as well as Minimum Bayes Risk (MBR and sMBR) (Goel and Byrne, 2000; Gibson and Hain, 2006; Sak et al., 2015; Prabhavalkar et al., 2018) and the Maximum Mutual Information (MMI) criterion (Bahl et al., 1986), amongst others. MMI and ASG are the closest to our differentiable decoder, as they perform global (sequence-level) normalization, which should help alleviate the label bias problem (LeCun et al., 1998; Lafferty et al., 2001; Bottou and LeCun, 2005; Andor et al., 2016).
Both MBR and MMI are usually trained after (or mixed with) another sequence loss or a force-alignment phase. MMI maximizes the average mutual information between the observation $X$ and its correct transcription $\mathbf{y}$. Considering an HMM with states $S_{\mathbf{y}}$ for a given transcription $\mathbf{y}$, MMI maximizes:

$$\sum_{(X, \mathbf{y})} \log \frac{p(X|S_{\mathbf{y}})\, P(\mathbf{y})}{\sum_{\mathbf{y}'} p(X|S_{\mathbf{y}'})\, P(\mathbf{y}')}. \qquad (13)$$
In contrast to MMI, MBR techniques integrate over plausible transcriptions $\mathbf{y}'$, weighting each candidate by some accuracy $A(\mathbf{y}', \mathbf{y})$ with respect to the ground truth – computing $\sum_{\mathbf{y}'} P(\mathbf{y}'|X)\, A(\mathbf{y}', \mathbf{y})$.
In a neural network context, one can apply Bayes' rule to plug in the emission probabilities. Ignoring $p(x_t)$ (constant with respect to the transcription), the term $p(X|S_{\mathbf{y}})$ in Equation (13) is approximated by $\prod_t p(s_t|x_t)/p(s_t)$, where $p(s_t|x_t)$ are the emissions (normalized per frame), and $p(s_t)$ is estimated from the training data. Apart from this approximation, two differences with our differentiable decoder are critical:

MMI considers normalized probabilities for both the acoustic and language model, while we consider unnormalized scores everywhere.

MMI does not jointly train the acoustic and language models. MMI does come in different flavors, though, with (fixed) token-level (phone) language models and no lexicon, as found in lattice-free MMI (LF-MMI) (Povey et al., 2016), and even trained end-to-end without any alignments (E2E LF-MMI), as in Hadian et al. (2018), though still with a pretrained phone LM.
ASG maximizes Equation (9), which is similar to Equation (4) but with two critical differences: (1) the score in Equation (10) contains no word language model, and (2) the normalization term is not constrained to valid sequences of words but ranges over all possible sequences of letters, and thus can be computed exactly (as is the case for LF-MMI). Unlike ASG, CTC assumes output tokens are conditionally independent given the input, and includes an optional blank token which makes the graph less regular (Liptchinsky et al., 2017).
Because our work is end-to-end, it is also related to seq2seq learning (Sutskever et al., 2014; Bahdanau et al., 2014; Chan et al., 2017; Wiseman and Rush, 2016; Gehring et al., 2017), and in particular to training with existing/external language models (Sriram et al., 2018). Closest to our work is Baskar et al. (2018), which shares a similar motivation, training an acoustic model through beam search, although (1) its loss includes an error rate (as in MBR), (2) it considers partial hypotheses (promising accurate prefix boosting, PAPB), and (3) in practice it optimizes a loss composing this beam search sequence score with the cross-entropy (CE).
In NLP, training with a beam search procedure is not new (Collins and Roark, 2004; Daumé III and Marcu, 2005; Wiseman and Rush, 2016). Of those, Wiseman and Rush (2016) is the closest to our work, training a sequence model through a beam search with a global sequence score. To our knowledge, we are the first to train through a beam search decoder for speech recognition, where the multiplicity of gold transcription alignments makes the search more complex. Also related, several works target the loss/evaluation mismatch (and sometimes exposure bias) through reinforcement learning (policy gradient) (Bahdanau et al., 2016b; Ranzato et al., 2016), even in speech recognition (Zhou et al., 2018).

Finally, our work makes a generic beam search differentiable end-to-end, and so relates to relaxing the beam search algorithm itself (e.g. obtaining a soft beam through a soft argmax) (Goyal et al., 2018), although we use a discrete beam. Compared to differentiable dynamic programming (Mensch and Blondel, 2018; Bahdanau et al., 2016b), we use a $\mathrm{logadd}$ where they use a softmax, and we keep track of an n-best (beam) set while they use Viterbi-like algorithms.
8 Conclusion
We build a fully differentiable beam search decoder which is capable of jointly training a scoring function and arbitrary transition models. The DBD can handle unaligned sequences by considering all possible alignments between the input and the target. Key to this approach is a carefully implemented and highly optimized beam search procedure which includes a novel target sequence-tracking mechanism and an efficient gradient computation. As we show, DBD can scale to very long sequences with thousands of time steps. We are able to perform a full training pass (epoch) through the WSJ data in about half an hour with a beam size of 500.
We show that the beam search decoder can be used to efficiently approximate a discriminative model which alleviates exposure bias from the mismatch between training and inference. We also avoid the label bias problem by using unnormalized scores and performing a sequencelevel normalization. Furthermore, the use of unnormalized scores allows DBD to avoid expensive local normalizations over large vocabularies.
Since DBD jointly trains the scoring function and the transition models, it does away with the need for decoder hyperparameter tuning on a held-out validation set. We also observe on the WSJ test set that DBD can achieve better WERs at a substantially smaller beam size (500 vs 8000) than a well-tuned ASG baseline.
On the WSJ dataset, DBD allows us to train much simpler and smaller acoustic models with better error rates. One reason for this is that DBD can limit the competing outputs to only sequences consisting of valid words in a given lexicon. This frees the acoustic model from needing to assign lower probability to invalid sequences. Including an explicit language model further decreases the burden on the acoustic model since it does not need to learn an implicit language model. We show that models with fewer parameters and half the temporal receptive field can achieve equally good error rates when using DBD.
References

Graves et al. (2006) Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In International Conference on Machine Learning (ICML), pages 369–376, 2006.
 Sutskever et al. (2014) Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 3104–3112, 2014.
 Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2014.
 Ranzato et al. (2016) Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In International Conference on Learning Representations (ICLR), 2016.

Wiseman and Rush (2016) Sam Wiseman and Alexander M. Rush. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1296–1306. Association for Computational Linguistics, 2016. URL http://aclweb.org/anthology/D161137.
 Baskar et al. (2018) Murali Karthick Baskar, Lukáš Burget, Shinji Watanabe, Martin Karafiát, Takaaki Hori, and Jan Honza Černockỳ. Promising accurate prefix boosting for sequence-to-sequence ASR. arXiv preprint arXiv:1811.02770, 2018.
 LeCun et al. (1998) Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Lafferty et al. (2001) John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning (ICML), pages 282–289, 2001. URL http://dl.acm.org/citation.cfm?id=645530.655813.

Bottou and LeCun (2005) Léon Bottou and Yann LeCun. Graph transformer networks for image recognition. Bulletin of the 55th Biennial Session of the International Statistical Institute (ISI), 2005.
 Andor et al. (2016) Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pages 2442–2452. Association for Computational Linguistics, 2016. URL http://aclweb.org/anthology/P161231.
 Amodei et al. (2016) Dario Amodei, Sundaram Ananthanarayanan, Rishita Anubhai, Jingliang Bai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro, Qiang Cheng, Guoliang Chen, et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. In International Conference on Machine Learning (ICML), pages 173–182, 2016.

Liwicki et al. (2007) Marcus Liwicki, Alex Graves, Horst Bunke, and Jürgen Schmidhuber. A novel approach to on-line handwriting recognition based on bidirectional long short-term memory networks. In Proceedings of the International Conference on Document Analysis and Recognition, volume 1, pages 367–371, 2007. URL https://www.cs.toronto.edu/~graves/icdar_2007.pdf.
 Huang et al. (2016) De-An Huang, Li Fei-Fei, and Juan Carlos Niebles. Connectionist temporal modeling for weakly supervised action labeling. European Conference on Computer Vision (ECCV), pages 137–153, 2016. URL http://arxiv.org/abs/1607.08584.
 Collobert et al. (2016) Ronan Collobert, Christian Puhrsch, and Gabriel Synnaeve. Wav2letter: an endtoend convnetbased speech recognition system. arXiv preprint arXiv:1609.03193, 2016.
 Paul and Baker (1992) Douglas B. Paul and Janet M. Baker. The design for the Wall Street Journal-based CSR corpus. In Proceedings of the Workshop on Speech and Natural Language, pages 357–362. Association for Computational Linguistics, 1992.
 Bottou (1991) Léon Bottou. Une Approche théorique de l’Apprentissage Connexionniste: Applications à la Reconnaissance de la Parole. PhD thesis, Université de Paris XI, Orsay, France, 1991. URL http://leon.bottou.org/papers/bottou91a.
 Dauphin et al. (2017) Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In International Conference on Machine Learning (ICML), pages 933–941, 2017.
 Liptchinsky et al. (2017) Vitaliy Liptchinsky, Gabriel Synnaeve, and Ronan Collobert. Letter-based speech recognition with gated convnets. arXiv preprint arXiv:1712.09444, 2017.
 Heafield (2011) Kenneth Heafield. KenLM: faster and smaller language model queries. In Proceedings of the EMNLP Workshop on Statistical Machine Translation, pages 187–197, 2011. URL https://kheafield.com/papers/avenue/kenlm.pdf.
 Pascanu et al. (2013) Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning (ICML), 2013.
 Salimans and Kingma (2016) Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 901–909. 2016.
 Bahdanau et al. (2016a) Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end attention-based large vocabulary speech recognition. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4945–4949. IEEE, 2016a.
 Zeghidour et al. (2018) Neil Zeghidour, Qiantong Xu, Vitaliy Liptchinsky, Nicolas Usunier, Gabriel Synnaeve, and Ronan Collobert. Fully convolutional speech recognition. arXiv preprint arXiv:1812.06864, 2018.
 Hadian et al. (2018) Hossein Hadian, Hossein Sameti, Daniel Povey, and Sanjeev Khudanpur. End-to-end speech recognition using lattice-free MMI. In Interspeech, 2018.
 Koehn and Knowles (2017) Philipp Koehn and Rebecca Knowles. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39. Association for Computational Linguistics, 2017. URL http://aclweb.org/anthology/W17-3204.
 Graves and Jaitly (2014) Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In International Conference on Machine Learning (ICML), pages 1764–1772, 2014.
 Bahdanau et al. (2016b) Dzmitry Bahdanau, Dmitriy Serdyuk, Philémon Brakel, Nan Rosemary Ke, Jan Chorowski, Aaron Courville, and Yoshua Bengio. Task loss estimation for sequence prediction. In International Conference on Learning Representations (ICLR) Workshop, 2016b.
 Chan et al. (2017) William Chan, Yu Zhang, Quoc Le, and Navdeep Jaitly. Latent sequence decompositions. In International Conference on Learning Representations (ICLR), 2017.
 Goel and Byrne (2000) Vaibhava Goel and William J. Byrne. Minimum Bayes-risk automatic speech recognition. Computer Speech & Language, 14(2):115–135, 2000.
 Gibson and Hain (2006) Matthew Gibson and Thomas Hain. Hypothesis spaces for minimum Bayes risk training in large vocabulary speech recognition. In Interspeech, pages 2–4, 2006.
 Sak et al. (2015) Haşim Sak, Andrew Senior, Kanishka Rao, and Françoise Beaufays. Fast and accurate recurrent neural network acoustic models for speech recognition. arXiv preprint arXiv:1507.06947, 2015.
 Prabhavalkar et al. (2018) Rohit Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick Nguyen, Zhifeng Chen, Chung-Cheng Chiu, and Anjuli Kannan. Minimum word error rate training for attention-based sequence-to-sequence models. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4839–4843. IEEE, 2018.

Bahl et al. (1986) Lalit Bahl, Peter Brown, Peter De Souza, and Robert Mercer. Maximum mutual information estimation of hidden Markov model parameters for speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), volume 11, pages 49–52. IEEE, 1986.

Povey et al. (2016) Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, and Sanjeev Khudanpur. Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Interspeech, pages 2751–2755, 2016.
 Gehring et al. (2017) Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. In International Conference on Machine Learning (ICML), pages 1243–1252, 2017. URL http://proceedings.mlr.press/v70/gehring17a.html.
 Sriram et al. (2018) Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, and Adam Coates. Cold fusion: Training seq2seq models together with language models. In Interspeech, pages 387–391, 2018.

Collins and Roark (2004) Michael Collins and Brian Roark. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), 2004. URL http://aclweb.org/anthology/P04-1015.

Daumé III and Marcu (2005) Hal Daumé III and Daniel Marcu. Learning as search optimization: Approximate large margin methods for structured prediction. In International Conference on Machine Learning (ICML), pages 169–176, 2005.
 Zhou et al. (2018) Yingbo Zhou, Caiming Xiong, and Richard Socher. Improving end-to-end speech recognition with policy learning. In International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5819–5823. IEEE, 2018.

Goyal et al. (2018) Kartik Goyal, Graham Neubig, Chris Dyer, and Taylor Berg-Kirkpatrick. A continuous relaxation of beam search for end-to-end training of neural sequence models. In Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018.

Mensch and Blondel (2018) Arthur Mensch and Mathieu Blondel. Differentiable dynamic programming for structured prediction and attention. In International Conference on Machine Learning (ICML), 2018.