Text recognition in natural scene images has recently attracted much research interest in the computer vision community [18, 25, 31]. Sequence-learning-based text recognition techniques, which have been advancing rapidly in recent years [7, 25, 32], generally consist of an encoding module and a decoding module. The encoding module usually encodes the input image into feature vectors of fixed dimensionality with a certain encoding technique, such as a convolutional neural network (CNN) [16, 31, 34] or a gated recurrent unit (GRU) network [4, 8, 32], while the decoding module decodes the encoded feature vectors into the target string by exploiting an RNN, connectionist temporal classification (CTC) [12, 31], an attention mechanism [3, 5, 32], etc.
The state of the art in scene text recognition is the attention-based encoder-decoder framework [7, 25, 32]. It outputs a sequence of probability distributions (pds) that is expected to be aligned with the characters of the text in the input image. In model training, the probability of the ground-truth text (gt in short), calculated from the corresponding output pd sequence in a frame-wise style, is utilized to estimate the likelihood of the model parameters. In this paper, we call this joint probability the frame-wise probability (FP), and call the existing frame-wise-loss-based methods FP-based methods in the sequel.
However, in both the training and predicting processes of attention-based text recognition models, some characters may be missing or superfluous, which results in misalignment between the gt and the output pd sequence. In a recent work, Cheng et al. considered this phenomenon a result of attention drift, and addressed it by introducing the Focusing Attention Net (FAN), which achieved state-of-the-art performance. However, training FAN requires extra pixel-wise supervision information, which is expensive to provide, and the training process is time-consuming due to the large amount of pixel-wise computation.
Fig. 1 provides examples illustrating the phenomenon of missing and superfluous characters when training an attention-based text recognition model on the ground truth “DOVE#”. Here, ‘#’ represents the End-Of-Sequence (EOS) symbol, which is commonly used in attention-based methods [7, 25, 32]. In Fig. 1 (a) and (b), the model may recognize the inputs as “DVE#” and “DOOVE#” respectively, based on the output sequences. Compared with “DOVE#”, it is natural to say that the former misses an ‘O’ and the latter has a superfluous ‘O’.
By checking the training process of attention-based text recognition models, we can see that the FP-based methods simply train the model by maximizing the probability of each character in the gt, based on the corresponding pd of the attention's output sequence. However, the misalignments caused by missing or superfluous characters may confuse and mislead the training process, making the training costly and degrading the recognition accuracy. Concretely, back to Fig. 1: except for 'D' and '#', the three other characters 'O', 'V' and 'E' all have low probabilities in the corresponding pds (see the left diagrams), so large errors will be back-propagated to these pds during further training. Instead, if the training algorithm realized that the character 'O' in Fig. 1 (a) is missing and one of the characters 'O' in Fig. 1 (b) is superfluous, it would align 'V' and 'E' in Fig. 1 (a), and 'V', 'E' and '#' in Fig. 1 (b), to more appropriate pds (see the right diagrams). The characters would then have higher probabilities under the new alignment, and the training would only need to focus on the missing/superfluous character 'O', which makes it simpler and less costly.
Motivated by the observation above, and inspired by the concept of sequence alignment, where edit distance is used to measure the dissimilarity of two sequences, in this paper we propose a new method for scene text recognition under the attention-based encoder-decoder framework, called edit probability (EP in short). By treating the misalignment between the gt and the output pd sequence as the result of possible occurrences of missing/superfluous characters, EP tries to effectively estimate, during training, the probability of a string conditioned on the output pd sequence under the current model parameters while accounting for such occurrences. The merit is that the training process can focus on the missing, superfluous and misclassified characters, so the impact of misalignment is reduced substantially. To validate the proposed method, we conduct extensive experiments on several benchmarks, which show that EP can significantly boost recognition performance.
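To make the intuition concrete, the following toy computation (with made-up probabilities; the pd values, alphabet and alignment are purely illustrative and not from the paper) contrasts the frame-wise probability of "DOVE#" with the probability under an alignment that treats the 'O' as missing:

```python
import numpy as np

# Hypothetical output pd sequence for an image of "DOVE" whose 'O' frame is
# missing: 4 steps over the toy alphabet [D, O, V, E, '#'].
pd = np.array([
    [0.90, 0.02, 0.02, 0.03, 0.03],  # step 1 peaks at 'D'
    [0.05, 0.10, 0.70, 0.10, 0.05],  # step 2 peaks at 'V' (no 'O' frame)
    [0.05, 0.05, 0.10, 0.75, 0.05],  # step 3 peaks at 'E'
    [0.02, 0.02, 0.02, 0.04, 0.90],  # step 4 peaks at '#'
])
gt = [0, 1, 2, 3, 4]  # indices of D, O, V, E, '#'

# Frame-wise probability: force gt[i] onto pd[i]; everything after the
# missing 'O' is scored against the wrong frame.
fp = np.prod([pd[i, gt[i]] for i in range(4)])

# Probability under the realigned interpretation: 'O' is treated as missing,
# and V, E, '#' are matched to their actual frames.
realigned = pd[0, 0] * pd[1, 2] * pd[2, 3] * pd[3, 4]

print(fp, realigned)  # the realigned probability is orders of magnitude larger
```

Under the frame-wise view, large errors would be back-propagated even to the correctly recognized 'V' and 'E' frames; EP formalizes the realigned interpretation.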
2 Related Work
In the past decade, many methods have been proposed for scene text recognition. They roughly fall into three types: 1) traditional methods with handcrafted features, 2) naïve deep-neural-network-based methods, and 3) sequence-based methods.
In early years, traditional methods first extracted handcrafted visual features to detect and recognize individual characters one by one, then integrated these characters into words based on a set of heuristic rules or a language model. Neumann and Matas [35, 36] first trained a character classifier with extracted HOG descriptors, then recognized the characters of a cropped word image one by one with a sliding window. However, due to the low representation capability of handcrafted features, these methods cannot achieve satisfactory recognition performance.
Later, instead of handcrafted features, some deep-neural-network-based methods were developed for extracting robust visual features. Bissacco et al. adopted a fully connected network (FC) with 5 hidden layers for extracting character features, then applied an n-gram language model to recognize characters. Wang et al. and Jaderberg et al. [19, 20] first developed CNN-based frameworks for character feature representation, then applied heuristic rules for character generation. These naïve deep-neural-network-based methods usually recognize character sequences with the help of pre/post-processing, such as segmenting each character or applying non-maximum suppression, which can be very challenging because of complicated backgrounds and the inadequate distance between consecutive characters.
Recently, some researchers have treated the text recognition task as a sequence learning problem: first encoding a text image into a sequence of features with a deep neural network, then directly generating the character sequence with sequence recognition techniques. He et al. and Shi et al. proposed end-to-end neural networks that first capture visual feature representations using a CNN or RNN, then combine the CTC loss with the network outputs to calculate the conditional probability between the predicted and the target sequences. The state of the art for text recognition is the attention-based methods [7, 25, 32]. These methods first combine a CNN and an RNN for encoding text images into feature representations, then employ a frame-wise loss to optimize the model. In the training process, the misalignment between the gt sequence and the output pd sequence may mislead the training algorithm and result in poor performance.
Note that the misalignment problem has also been observed in attention training for speech recognition. Kim et al. tried to solve the problem by using a joint CTC-attention model within the multi-task learning framework. However, as pointed out in prior work, the joint CTC-attention model does not work well in scene text recognition. This paper also addresses the scene text recognition problem under the attention-based framework. Different from the existing methods, we propose a novel method, EP, that tries to estimate the probability of a string conditioned on the input image, by treating the misalignment between the text and the output pd sequence as the result of possible occurrences of missing/superfluous characters. EP provides an effective way to handle the misalignment problem, and empirically it outperforms the existing methods.
3 The EP Method
In this section, we present the EP method in detail, including the EP-based attention decoder, the formulation of EP, the EP training process, and EP-based prediction with/without a lexicon.
Edit probability is proposed to effectively train attention-based models for accurate scene text recognition. Conceptually, for an image I and a text string T, EP measures the probability of T conditioned on I under model parameters θ. It is evaluated by summing the probabilities of all possible edit paths that transform an initially empty string into T, based on the pd sequence generated by the model. Each edit path consists of a sequence of edit operations, which are detailed in Sec. 3.2.
3.1 EP-based Attention Decoder
The original attention decoder is an RNN that generates the output pd p_t on the t-th step:

p_t = softmax(W s_t + b),
s_t = LSTM(s_{t-1}, g_t),
g_t = Σ_j α_{t,j} h_j,
α_{t,j} = exp(e_{t,j}) / Σ_k exp(e_{t,k}),
e_{t,j} = v⊤ tanh(W_s s_{t-1} + W_h h_j),

where h = (h_1, …, h_n) is the sequence of encoded feature vectors, and s_t, g_t, α_t and e_t represent the LSTM hidden state, the weighted sum of h, the attention weights and the alignment model on the t-th step, respectively. W, b, v, W_s and W_h are all trainable parameters.
In this work, for EP calculation, the attention decoder also generates β_t = (β_t^c, β_t^i, β_t^d) and q_t on the t-th step:

β_t = softmax(W_β s_t),
q_t = softmax(W_q s_t),

where W_β and W_q are trainable parameters. β_t^c, β_t^i and β_t^d respectively represent the probability of p_t being correctly aligned, of a character being missing before p_t, and of p_t being superfluous, while q_t is the pd of the character (if any) missing before p_t, conditioned on s_t.
3.2 Edit Probability
Formally, with the alphabet A (including the EOS), let A* represent the set of all valid strings (each of which contains one and only one EOS, as its end token) over A. Given a text string T = c_1 c_2 … c_|T| ∈ A* and an output pd sequence p = (p_1, p_2, …, p_N), we define the edit states as tuples (i, j) for 0 ≤ i ≤ |T| and 0 ≤ j ≤ N. The state (i, j) indicates generating the prefix c_1 … c_i from the first j pds p_1 … p_j. In particular, i = 0 and j = 0 represent an empty string (also denoted by “”) and an empty pd sequence, respectively. The edit operations for a state are defined as follows:
consumption: the consumption operation transforms state (i−1, j−1) to state (i, j), for 1 ≤ i ≤ |T| and 1 ≤ j ≤ N, by regarding the character c_i and the pd p_j as being correctly aligned, and appending c_i to the generated prefix by consuming p_j. The probability of this operation is the joint probability of performing a consumption and of consuming c_i from p_j. That is,

Pr[cons(i, j)] = β_j^c · p_j(c_i).
deletion: the deletion operation transforms state (i, j−1) to state (i, j), for 1 ≤ j ≤ N, by regarding the pd p_j as being superfluous and deleting it directly. For i < |T|, the probability of this operation is β_j^d. For i = |T|, deletion is the only allowed operation on the state, as any character after the EOS is ignored. So we have

Pr[del(i, j)] = β_j^d if i < |T|, and Pr[del(i, j)] = 1 if i = |T|.
insertion: the insertion operation transforms state (i−1, j) to state (i, j), for 1 ≤ i ≤ |T|, by regarding c_i as a missing character and appending it to the generated prefix directly. For j < N, the probability of this operation is the joint probability that a character is missing from the position just before p_{j+1} and that the missing character is c_i, i.e., β_{j+1}^i · q_{j+1}(c_i). For j = N, insertion is the only allowed operation on the state, as there is no more pd to delete or consume, and its probability is given by the missing-character pd alone.
By assuming that the edit operations over T and p are conditionally independent, the probability of an edit path e = (e_1, e_2, …, e_K) is the joint probability of all the edit operations in e. That is,

Pr[e] = Π_{k=1}^{K} Pr[e_k],

where e_k refers to the k-th edit operation in e. In particular, the probability of an empty path is 1.
The edit probability ep(i, j) of state (i, j) is evaluated as the sum of the conditional probabilities of all edit paths that transform state (0, 0) to state (i, j), where (0, 0) corresponds to the empty string “” and the empty pd sequence. Formally,

ep(i, j) = Σ_{e ∈ E(i, j)} Pr[e],

where E(i, j) is the set of all edit paths from (0, 0) to (i, j). That is, the edit probability of the whole string is EP(T | I; θ) = ep(|T|, N).
However, enumerating all the possible edit paths is prohibitively expensive (if not impossible), as the search space is too large. Fortunately, this can be solved by dynamic programming, based on the following recurrence inferred from Eq. 9:

ep(i, j) = ep(i−1, j−1) · Pr[cons(i, j)] + ep(i, j−1) · Pr[del(i, j)] + ep(i−1, j) · Pr[ins(i, j)],

for 0 ≤ i ≤ |T| and 0 ≤ j ≤ N, with ep(0, 0) = 1 and terms with negative indices omitted; equivalently, each path to (i, j) is the concatenation of a path to a predecessor state and one edit operation. By recursively applying Eq. 14, ep(|T|, N) can be calculated in O(|T| · N) time.
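As a sketch, this dynamic program can be written in a few lines of Python. The function and argument names are ours, and the boundary conventions (forced deletion once the EOS has been generated, forced insertion once the pds are exhausted) follow our reading of the prose rather than the paper's exact equations:

```python
import numpy as np

def edit_probability(gt, pd, beta_c, beta_d, beta_i, q_miss):
    """Dynamic-programming sketch of ep(|gt|, N).

    gt     : list of character indices, ending with the EOS index.
    pd     : (N, K) array; pd[j] is the output distribution at step j.
    beta_c : (N,)  per-step consumption (correctly aligned) probabilities.
    beta_d : (N,)  per-step deletion (superfluous pd) probabilities.
    beta_i : (N+1,) insertion probabilities (a character may be missing
             before each pd, and after the last one).
    q_miss : (N+1, K) distributions of the hypothetical missing character.
    """
    m, N = len(gt), pd.shape[0]
    ep = np.zeros((m + 1, N + 1))
    ep[0, 0] = 1.0  # empty string generated from empty pd sequence
    for i in range(m + 1):
        for j in range(N + 1):
            if i == 0 and j == 0:
                continue
            acc = 0.0
            if i > 0 and j > 0:  # consumption: align gt[i-1] with pd[j-1]
                acc += ep[i - 1, j - 1] * beta_c[j - 1] * pd[j - 1, gt[i - 1]]
            if j > 0:            # deletion: pd[j-1] is superfluous
                # once the EOS has been generated (i == m), deletion is forced
                acc += ep[i, j - 1] * (1.0 if i == m else beta_d[j - 1])
            if i > 0:            # insertion: gt[i-1] is missing before pd[j]
                # at j == N there is no pd left, so insertion is forced
                p_ins = (q_miss[j, gt[i - 1]] if j == N
                         else beta_i[j] * q_miss[j, gt[i - 1]])
                acc += ep[i - 1, j] * p_ins
            ep[i, j] = acc
    return ep[m, N]
```

Training then simply minimizes −log of this value for each (image, gt) pair.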
3.3 EP Training
With the training set D, which consists of pairs of images and gt strings, EP training is to find the parameters θ that minimize the negative log-likelihood over D:

θ* = argmin_θ − Σ_{(I, T) ∈ D} log EP(T | I; θ).

The model can be optimized with the standard back-propagation algorithm.
3.4 EP Predicting
EP predicting is to find the string T* that maximizes EP(T | I; θ):

T* = argmax_{T ∈ A*} EP(T | I; θ).

However, searching the whole answer set A* with Eq. 16 is extremely costly. Therefore, we develop two efficient sequence generation mechanisms, for lexicon-free and lexicon-based prediction respectively.
Predicting without a lexicon. By analyzing the prediction problem, we find that, in general, the string that maximizes EP(T | I; θ) is mostly a prefix (ended by an EOS ‘#’) of the EOS-free string T′ that has the most probable edit path. Therefore, we first find the string T′ over A′ = A ∖ {‘#’}, the alphabet without the EOS; then we use all the prefixes of T′ (each ended by an EOS) as the candidate set C:

C = { T′_{1..i} ⊕ ‘#’ | 0 ≤ i ≤ |T′| },

where ⊕ represents the concatenation operator for two strings; finally, we select the best T* ∈ C that maximizes EP(T | I; θ).
Note that the edit path with the highest probability should not include an insertion operation that inserts a non-EOS character, because removing such insertions yields a new edit path whose conditional probability is greater than that of the previous one.
Therefore, we can generate T′ by beginning with state (0, 0) and performing the most probable deletion or consumption operation (not considering any operation generating the EOS) at each step, until a state (·, N) is reached. Since all strings in C share common prefixes, we can compute all the EP values of the candidates in O(|T′| · N) time with Eq. 14.
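A minimal sketch of this greedy generation and the candidate set (our naming and simplifications; in particular, the per-step comparison between consumption and deletion is an assumption about how "the most probable operation" is chosen):

```python
import numpy as np

def greedy_base_string(pd, beta_c, beta_d, eos):
    """Generate T' by choosing, at each pd, the more probable of
    (a) consuming it as the best non-EOS character, or (b) deleting it."""
    chars = []
    for j in range(pd.shape[0]):
        probs = pd[j].copy()
        probs[eos] = 0.0              # operations generating the EOS are excluded
        c = int(np.argmax(probs))
        if beta_c[j] * probs[c] >= beta_d[j]:
            chars.append(c)           # consumption: pd[j] aligned with c
        # else: deletion, pd[j] treated as superfluous
    return chars

def candidate_set(base, eos):
    """All prefixes of T', each terminated by the EOS."""
    return [base[:i] + [eos] for i in range(len(base) + 1)]
```

The best candidate is then picked by evaluating EP for each prefix; since the prefixes share dynamic-programming columns, this costs O(|T′|·N) overall rather than a fresh table per candidate.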
Predicting with a lexicon. In constrained cases, we can enumerate the strings in a lexicon L and find the most probable one. A tunable parameter λ is used to indicate how much we trust the lexicon, since some target strings may not be contained in it. Therefore, we actually assess all the possible strings in C ∪ L with a λ-weighted score, where λ ∈ [0, 1]. The larger λ is, the more we trust the lexicon, and vice versa. Specifically, λ = 0 means that the lexicon provides only some additional candidates that are treated equally to those in C, while λ = 1 means that the generated strings are guaranteed to appear in the lexicon L.
However, as the lexicon grows, the above enumeration-based method becomes extremely time-consuming. To tackle this problem, Shi et al. used a prefix tree (Trie) [9, 32] to accelerate the search process, since many strings in the lexicon share common prefixes. In this work, we develop a data structure called edit probability Trie (EP-Trie), a variant of Trie whose nodes contain not only a prefix w, but also the vector of values ep(|w|, j) for 0 ≤ j ≤ N. The vector of a node can be computed from the vector of its parent in O(N) time with Eq. 14. We demonstrate the effectiveness of EP-Trie in Sec. 4.5, and Fig. 3 illustrates its structure.
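The O(N) parent-to-child column update at the heart of EP-Trie can be sketched as follows (our naming; the forced deletion after an EOS is omitted for brevity, so this applies to the EOS-free prefixes stored in the Trie):

```python
def base_column(beta_d):
    """ep(0, j): the empty string is generated by deleting every pd."""
    col = [1.0]
    for bd in beta_d:
        col.append(col[-1] * bd)
    return col

def extend_column(col, ch, pd, beta_c, beta_d, beta_i, q_miss):
    """Given ep(i-1, .) for a prefix, compute ep(i, .) for prefix + ch in O(N).

    pd: list of N distributions; beta_i / q_miss have N+1 entries (a character
    may be missing before each pd and after the last one).
    """
    N = len(pd)
    new = [col[0] * beta_i[0] * q_miss[0][ch]]      # j = 0: insertion only
    for j in range(1, N + 1):
        ins = (q_miss[j][ch] if j == N              # forced: no pd left
               else beta_i[j] * q_miss[j][ch])
        new.append(col[j - 1] * beta_c[j - 1] * pd[j - 1][ch]  # consumption
                   + new[j - 1] * beta_d[j - 1]                # deletion
                   + col[j] * ins)                             # insertion
    return new
```

Each EP-Trie node caches its column; a child's column is derived from its parent's, so scoring a lexicon word of length m touches only m column updates along its Trie path instead of a full |T|×N dynamic program per word.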
Tab. 1: Recognition accuracies (%) on the benchmarks. “50” and “1k” denote lexicon sizes, “Full” the combined full lexicon, and “None” no lexicon; blank cells mean results not reported.

| Method | IIIT5K-50 | IIIT5K-1k | IIIT5K-None | SVT-50 | SVT-None | IC03-50 | IC03-Full | IC03-None | IC13-None | IC15-None |
|---|---|---|---|---|---|---|---|---|---|---|
| Wang et al. | | | | 57.0 | | 76.0 | 62.0 | | | |
| Mishra et al. | 64.1 | 57.5 | | 73.2 | | 81.8 | 67.8 | | | |
| Wang et al. | | | | 70.0 | | 90.0 | 84.0 | | | |
| Goel et al. | | | | 77.3 | | 89.7 | | | | |
| Bissacco et al. | | | | 90.4 | 78.0 | | | | 87.6 | |
| Alsharif and Pineau | | | | 74.3 | | 93.1 | 88.6 | | | |
| Almazán et al. | 91.2 | 82.1 | | 89.2 | | | | | | |
| Yao et al. | 80.2 | 69.3 | | 75.9 | | 88.5 | 80.3 | | | |
| Rodríguez-Serrano et al. | 76.1 | 57.4 | | 70.0 | | | | | | |
| Jaderberg et al. | | | | 86.1 | | 96.2 | 91.5 | | | |
| Su and Lu | | | | 83.0 | | 92.0 | 82.0 | | | |
| Jaderberg et al. | 97.1 | 92.7 | | 95.4 | 80.7 | 98.7 | 98.6 | 93.1 | 90.8 | |
| Jaderberg et al. | 95.5 | 89.6 | | 93.2 | 71.7 | 97.8 | 97.0 | 89.6 | 81.8 | |
| Shi et al. | 97.6 | 94.4 | 78.2 | 96.4 | 80.8 | 98.7 | 97.6 | 89.4 | 86.7 | |
| Shi et al. | 96.2 | 93.8 | 81.9 | 95.5 | 81.9 | 98.3 | 96.2 | 90.1 | 88.6 | |
| Lee et al. | 96.8 | 94.4 | 78.4 | 96.3 | 80.7 | 97.9 | 97.0 | 88.7 | 90.0 | |
| Cheng et al. | 99.3 | 97.5 | 87.4 | 97.1 | 85.9 | 99.2 | 97.3 | 94.2 | 93.3 | 70.6 |
| Shi et al. (baseline) | 96.5 | 92.8 | 79.7 | 96.1 | 81.5 | 97.8 | 96.4 | 88.7 | 87.5 | |
| Cheng et al. (baseline) | 98.9 | 96.8 | 83.7 | 95.7 | 82.2 | 98.5 | 96.7 | 91.5 | 89.4 | 63.3 |
| Shi’s + EP (ours) | 99.1 | 97.3 | 85.0 | 96.3 | 86.2 | 98.4 | 97.0 | 93.7 | 93.0 | 68.1 |
| Cheng’s + EP (ours) | 99.5 | 97.9 | 88.3 | 96.6 | 87.5 | 98.7 | 97.9 | 94.6 | 94.4 | 73.9 |
4 Performance Evaluation
We conduct extensive experiments to validate the EP method on several general recognition benchmarks under the attention framework. For a fair and comprehensive comparison, we directly employ the structures of the state-of-the-art works and replace their loss layers with EP. We first evaluate the performance of the EP-based methods and compare them with the existing methods. Then we demonstrate the advantage of EP training over frame-wise-loss-based training on some real training data. Finally, we evaluate our method with the Hunspell 50k lexicon, and compare EP predicting with and without a lexicon.
4.1 Datasets

IIIT 5K-Words (IIIT5K) is a dataset collected from the Internet, with 3000 cropped word images in its test set. For each of its images, a 50-word lexicon and a 1k-word lexicon are specified, both of which contain the ground-truth word as well as other randomly picked words.
Street View Text  (SVT) is collected from the Google Street View. Its test set consists of 647 word images, each of which is specified with a 50-word lexicon.
ICDAR 2003 (IC03) contains 251 scene images with text bounding boxes. Each image is associated with a 50-word lexicon defined by Wang et al. A full lexicon that combines all lexicon words is also provided. For fair comparison, we discard the images containing non-alphanumeric characters or having fewer than three characters. The resulting dataset contains 867 cropped images.
ICDAR 2013  (IC13) is the successor of IC03, so most of its data are inherited from IC03. It contains 1015 cropped text images, but no lexicon is associated.
ICDAR 2015  (IC15) contains 2077 cropped images. For fair comparison, we discard the images containing non-alphanumeric characters, and eventually obtain 1811 images in total. No lexicon is associated.
4.2 Implementation Details
Network Structure: The attention-based encoder-decoder framework is the state-of-the-art technique for text recognition. It consists of two major steps: 1) obtaining visual feature representations with a CNN-based feature extractor, such as the 7-Conv-based extractor by Shi et al. and the ResNet-based extractor by Cheng et al.; 2) generating the output sequence of probability distributions with the attention model. In this work, we evaluate the proposed method by replacing the FP-based training/predicting in Shi’s and Cheng’s structures with EP-based training/predicting.
Model Training: Our model is trained on the 8 million synthetic images released by Jaderberg et al. and the 4 million synthetic images (excluding those containing non-alphanumeric characters) cropped from the 800 thousand images released by Gupta et al., using the ADADELTA optimization method.
Our method is implemented under the Caffe framework. In our implementation, most modules in our model can be GPU accelerated as the CUDA backend is extensively used. All experiments are conducted on a workstation equipped with an Intel Xeon(R) E5-2650 2.30GHz CPU, an NVIDIA Tesla P40 GPU and 128GB RAM.
4.3 Comparison with Existing Methods
Tab. 1 shows the performance of the two EP-based methods, the two FP-based (baseline) methods and previous methods. With both Shi’s and Cheng’s structures, the EP-based methods significantly outperform the baseline methods on all benchmarks. In comparison with the existing methods, we consider both constrained and unconstrained cases. In the unconstrained cases, the EP-based method (Cheng’s + EP) outperforms all existing methods, while in the constrained cases, our method (Cheng’s + EP) performs better than all existing methods on IIIT5K, and achieves results comparable to those of the method proposed by Cheng et al. (FAN) on the SVT and IC03 datasets. However, it should be pointed out that FAN is trained with both word-level and character-level bounding box annotations, which are expensive and time-consuming to obtain, whereas our method is trained with only word-level ground truth. We also note that the method proposed by Jaderberg et al. achieves the best result on IC03 with the full lexicon, but it cannot recognize out-of-training-set words.
4.4 Performance of EP Training
To demonstrate the advantage of EP in the training stage, we compare the calculation of the FP and the proposed EP on some real training data. The input images, the ground truths and the recognition results are displayed in the 1st, 2nd and 3rd rows in the upper part of Fig. 4, respectively. The recognition results are actually pd sequences; for demonstration convenience, we display only the character that dominates each pd.
For the FP calculation, we show a vector of probabilities for each image on the 4th row and the FP value on the 5th row in Fig. 4. The i-th element of the vector represents the joint conditional probability of generating the first i characters in the gt from the first i pds in the output sequence, and the probabilities after the EOS are ignored (regarded as 1). We have the following observations on FP: 1) when the output pd sequence is well aligned with the gt (see Fig. 4 (a) and (b)), the FP declines only at the place where a character is misclassified; 2) when the output pd sequence is misaligned with the gt (see Fig. 4 (c), (d), (e) and (f)), the FP keeps declining after any occurrence of a missing/superfluous character, even when the subsequent characters are correctly recognized. As a result, in the cases of misalignment, error will be back-propagated to the correctly recognized pds following the missing/superfluous character, which may confuse the model training.
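The FP vector described above amounts to a running product; a small sketch (function name and toy numbers are ours):

```python
import numpy as np

def fp_vector(gt, pd):
    """i-th entry: joint probability of generating gt[:i+1] frame-wise,
    i.e. the running product of pd[i, gt[i]]; steps after the EOS would
    simply be left out (treated as probability 1)."""
    n = min(len(gt), pd.shape[0])
    return np.cumprod([pd[i, gt[i]] for i in range(n)])

# Toy example: once a frame is misaligned, every later entry is dragged down
# even if the later characters are recognized correctly.
toy_pd = np.array([[0.9, 0.1], [0.2, 0.8]])
print(fp_vector([0, 1], toy_pd))  # running product: 0.9, then 0.9 * 0.8
```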
For the EP calculation, we display a matrix of probabilities for each image on the 6th row and the EP values on the 7th row. The (i, j) element of the matrix represents ep(i, j), the probability of generating the first i characters of the gt from the first j pds conditioned on the input image. We have the following observations on EP: 1) when the output pd sequence is well aligned with the gt (see Fig. 4 (a) and (b)), no matter whether the classification result is correct, the EP value is almost equal to the FP value and the most probable edit path appears on the matrix diagonal, that is, every character in the gt is generated by consuming the corresponding pd; 2) when some characters are missing/superfluous (see Fig. 4 (c) and (d) for missing-character cases, and (e) and (f) for superfluous-character cases), the EP value is much larger than the FP value, and the most probable edit path appears below or to the right of the matrix diagonal after the occurrence of the missing/superfluous characters. Different from the FP, the EP is indicative of inserting/deleting the missing/superfluous character and generating the others by consuming the aligned pds. As a result, in the cases of misalignment, most of the error will be back-propagated to the place where the missing/superfluous character occurs, and little error will be back-propagated to the correctly recognized pds before or after it, which makes the model training focus on the missing/superfluous characters instead of the misaligned ones.
As the EP-based methods theoretically require more computation than the baselines, we measure the training time cost. The results show that Shi’s/Cheng’s baselines cost 170.5 ms/536.0 ms per iteration, while the EP-based methods cost only 6.8 ms/7.0 ms more (with the batch size set to 75).
4.5 Performance of EP Prediction
Here, we evaluate the performance of EP prediction with/without a lexicon. In real-world text recognition tasks, it is not easy to get ground-truth-related lexicons. Therefore, following previous works [2, 19, 20, 31, 32], we also test our methods on a public ground-truth-unrelated lexicon, Hunspell, which contains more than 50k words. As mentioned in Sec. 3.4, λ is a tunable hyper-parameter in the predicting stage. We conduct lexicon-based prediction on all datasets by varying λ from 0 to 1. The results are shown in Fig. 5. We can see that: 1) the accuracy increases as λ grows from 0, but decreases rapidly as λ approaches 1 due to over-correction; 2) when λ is set to 0, the accuracy of lexicon-based prediction is exactly the same as that of lexicon-free prediction, which demonstrates the effectiveness of the proposed lexicon-free prediction method; 3) the ground-truth-unrelated lexicon is also helpful for improving text recognition performance if λ falls in a proper range, which validates the effectiveness of the lexicon-based prediction method.
Besides, we also test the enumeration-based and EP-Trie-based methods in terms of recognition accuracy and time cost per image when using the 50k lexicon. Our experiments show that: 1) the EP-Trie-based method predicts the same results as the enumeration-based method; and 2) the former costs 0.11 seconds per image while the latter costs 2.566 seconds per image, which demonstrates the excellent efficiency of EP-Trie.
5 Conclusion

In this work, we propose a new method called edit probability (EP) for accurate scene text recognition. The new method can effectively handle the misalignment between the training text and the output probability distribution sequence caused by missing or superfluous characters. We conduct extensive experiments on several benchmarks to validate the proposed method, and the experimental results show that EP can significantly improve recognition performance. In the future, we plan to apply the EP idea to machine translation, speech recognition, image/video captioning and other related tasks.
Acknowledgments

Fan Bai and Shuigeng Zhou were partially supported by the Science and Technology Innovation Action Program of the Science and Technology Commission of Shanghai Municipality (STCSM) under grant No. 17511105204.
References

-  J. Almazán, A. Gordo, A. Fornés, and E. Valveny. Word Spotting and Recognition with Embedded Attributes. TPAMI, 36(12):2552–2566, 2014.
-  O. Alsharif and J. Pineau. End-to-end text recognition with hybrid hmm maxout models. In ICLR, 2014.
-  D. Bahdanau, K. Cho, and Y. Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In ICLR, 2015.
-  D. Bahdanau, J. Chorowski, D. Serdyuk, Y. Bengio, et al. End-to-end attention-based large vocabulary speech recognition. In ICASSP, pages 4945–4949, 2016.
-  D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio. End-to-end attention-based large vocabulary speech recognition. In ICASSP, pages 4945–4949, 2016.
-  A. Bissacco, M. Cummins, Y. Netzer, and H. Neven. PhotoOCR: Reading Text in Uncontrolled Conditions. In ICCV, pages 785–792, 2013.
-  Z. Cheng, F. Bai, Y. Xu, G. Zheng, S. Pu, and S. Zhou. Focusing attention: Towards accurate text recognition in natural images. In ICCV, pages 5076–5084, 2017.
-  K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, abs/1409.1259, 2014.
-  R. De La Briandais. File searching using variable length keys. In Western Joint Computer Conference, pages 295–298, 1959.
-  V. Goel, A. Mishra, K. Alahari, and C. V. Jawahar. Whole is Greater than Sum of Parts: Recognizing Scene Text Words. In ICDAR, pages 398–402, 2013.
-  A. Gordo. Supervised mid-level features for word image representation. In CVPR, pages 2956–2964, 2015.
-  A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In ICML, pages 369–376, 2006.
-  A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pages 6645–6649, 2013.
-  A. Gupta, A. Vedaldi, and A. Zisserman. Synthetic Data for Text Localisation in Natural Images. In CVPR, pages 2315–2324, 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CVPR, pages 770–778, 2016.
-  S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
-  Hunspell. http://hunspell.github.io.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Synthetic Data and Artificial Neural Networks for Natural Scene Text Recognition. arXiv preprint arXiv:1406.2227, 2014.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Deep Structured Output Learning for Unconstrained Text Recognition. In ICLR, 2015.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Reading Text in the Wild with Convolutional Neural Networks. IJCV, 116(1):1–20, 2016.
-  Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional Architecture for Fast Feature Embedding. In ACM MM, pages 675–678, 2014.
-  D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. R. Chandrasekhar, S. Lu, F. Shafait, S. Uchida, and E. Valveny. ICDAR 2015 Competition on Robust Reading. In ICDAR, pages 1156–1160, 2015.
-  D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. G. i. Bigorda, S. R. Mestre, J. Mas, D. F. Mota, J. A. Almazàn, and L. P. de las Heras. ICDAR 2013 Robust Reading Competition. In ICDAR, pages 1484–1493, 2013.
-  S. Kim, T. Hori, and S. Watanabe. Joint CTC-Attention based End-to-End Speech Recognition using Multi-task Learning. In ICASSP, pages 4835–4839, 2017.
-  C. Y. Lee and S. Osindero. Recursive Recurrent Nets with Attention Modeling for OCR in the Wild. In CVPR, pages 2231–2239, 2016.
-  S. M. Lucas, A. Panaretos, L. Sosa, A. Tang, S. Wong, and R. Young. ICDAR 2003 robust reading competitions. In ICDAR, pages 682–687, 2003.
-  A. Mishra, K. Alahari, and C. V. Jawahar. Scene Text Recognition using Higher Order Language Priors. In BMVC, pages 1–11, 2012.
-  L. Neumann and J. Matas. Real-time scene text localization and recognition. In CVPR, pages 3538–3545, 2012.
-  J. A. Rodriguez-Serrano, A. Gordo, and F. Perronnin. Label Embedding: A Frugal Baseline for Text Recognition. IJCV, 113(3):193–207, 2015.
-  D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988.
-  B. Shi, X. Bai, and C. Yao. An End-to-End Trainable Neural Network for Image-Based Sequence Recognition and Its Application to Scene Text Recognition. IEEE TPAMI, 39(11):2298–2304, 2017.
-  B. Shi, X. Wang, P. Lyu, C. Yao, and X. Bai. Robust Scene Text Recognition with Automatic Rectification. In CVPR, pages 4168–4176, 2016.
-  B. Su and S. Lu. Accurate Scene Text Recognition Based on Recurrent Neural Network. In ACCV, pages 35–48, 2015.
-  I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, NIPS, pages 3104–3112, 2014.
-  K. Wang, B. Babenko, and S. Belongie. End-to-end scene text recognition. In ICCV, pages 1457–1464, 2011.
-  K. Wang and S. Belongie. Word Spotting in the Wild. In ECCV, pages 591–604. Springer, 2010.
-  T. Wang, D. J. Wu, A. Coates, and A. Y. Ng. End-to-end text recognition with convolutional neural networks. In ICPR, pages 3304–3308, 2012.
-  C. Yao, X. Bai, B. Shi, and W. Liu. Strokelets: A Learned Multi-scale Representation for Scene Text Recognition. In CVPR, pages 4042–4049, 2014.
-  M. D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. CoRR, abs/1212.5701, 2012.