Text in natural scene images contains rich semantic information and is of great value for image understanding. As an important task in image analysis, scene text spotting, which includes both text detection and word recognition, attracts much attention in the computer vision field. It has many potential applications, ranging from web image search and robot navigation to image retrieval.
Due to the large variability of text patterns and the highly complicated background, text spotting in natural scene images is much more challenging than in scanned documents. Although significant progress has been made recently based on Deep Neural Network (DNN) techniques, it is still an open problem.
Previous works [28, 2, 12, 11] usually divide text spotting into a sequence of distinct sub-tasks. Text detection is performed first with a high recall to obtain candidate regions of text. Word recognition is then applied on the cropped text bounding boxes by a totally different approach, following word separation or character grouping. A number of techniques have also been developed which focus solely on text detection or word recognition. However, the tasks of word detection and recognition are highly correlated. Firstly, feature information can be shared between them. In addition, the two tasks can complement each other: better detection improves recognition accuracy, and recognition information can in turn refine detection results.
To this end, we propose an end-to-end trainable text spotter, which jointly detects and recognizes words in an image. An overview of the network architecture is presented in Figure 1. It consists of a number of convolutional layers, a region proposal network tailored specifically for text (referred to as Text Proposal Network, TPN), a Recurrent Neural Network (RNN) encoder for embedding proposals of varying sizes into fixed-length vectors, multi-layer perceptrons for detection and bounding box regression, and an attention-based RNN decoder for word recognition. Via this framework, both text bounding boxes and word labels are provided with a single forward evaluation of the network. We do not need to handle intermediate steps such as character grouping [35, 26] or text line separation, and thus avoid error accumulation. The main contributions are thus three-fold.
An end-to-end trainable DNN is designed to optimize the overall accuracy and share computations. The network integrates both text detection and word recognition. With end-to-end training of both tasks, the learned features are more informative, which improves both the detection performance and the overall performance. The convolutional features are shared by detection and recognition, which saves processing time. To the best of our knowledge, this is the first attempt to integrate text detection and recognition into a single end-to-end trainable network.
We propose a new method for region feature extraction. In previous works [4, 21], the Region-of-Interest (RoI) pooling layer converts regions of different sizes and aspect ratios into feature maps with a fixed size. Considering the significant diversity of aspect ratios in text bounding boxes, it is sub-optimal to fix the size after pooling. To accommodate the original aspect ratios and avoid distortion, RoI pooling is tailored to generate feature maps with varying lengths. An RNN encoder is then employed to encode feature maps of different lengths into vectors of the same size.
A curriculum learning strategy is designed to train the system with gradually more complex training data. Starting from synthetic images with simple appearance and a large word lexicon, the system learns a character-level language model and finds a good initialization of the appearance model. By employing real-world images with a small lexicon later, the system gradually learns how to handle complex appearance patterns. We conduct a set of experiments to explore the capabilities of different model structures. The best model outperforms state-of-the-art results on a number of standard text spotting benchmarks, including ICDAR 2011 and 2015.
2 Related Work
Text spotting essentially includes two tasks: text detection and word recognition. In this section, we present a brief introduction to related works on text detection, word recognition, and text spotting systems that combine both. There are comprehensive surveys for text detection and recognition in [30, 36].
2.1 Text Detection
Text detection aims to localize text in images and generate bounding boxes for words. Existing approaches can be roughly classified into three categories: character based, text-line based and word based methods.
Character based methods first find characters in images, and then group them into words. They can be further divided into sliding window based [12, 29, 35, 26] and Connected Components (CC) based [9, 20, 3] methods. Sliding window based approaches use a trained classifier to detect characters across the image in a multi-scale sliding window fashion. CC based methods segment pixels with consistent region properties (e.g., color, stroke width, density) into characters. The detected characters are further grouped into text regions by morphological operations, CRFs or other graph models.
Text-line based methods detect text lines first and then separate each line into multiple words. The motivation is that people usually distinguish text regions initially even if characters are not recognized. Based on the observation that a text region usually exhibits high self-similarity and strong contrast to its local background, Zhang et al. propose to extract text lines by exploiting this symmetry property. Zhang et al. localize text lines via saliency maps calculated by fully convolutional networks. Post-processing techniques are also proposed to extract text lines in multiple orientations.
More recently, a number of approaches have been proposed to detect words directly in images using DNN based techniques, such as Faster R-CNN, YOLO and SSD. By extending Faster R-CNN, Zhong et al. design a text detector with a multi-scale Region Proposal Network (RPN) and a multi-level RoI pooling layer. Tian et al. develop a vertical anchor mechanism, and propose a Connectionist Text Proposal Network (CTPN) to accurately localize text lines in natural images. Gupta et al. use a Fully-Convolutional Regression Network (FCRN) for efficient text detection and bounding box regression, motivated by YOLO. Similar to SSD, Liao et al. propose “TextBoxes” by combining predictions from multiple feature maps with different resolutions, and achieve the best-reported performance on text detection with the datasets in [14, 28].
2.2 Text Recognition
Traditional approaches to text recognition usually work in a bottom-up fashion: they recognize individual characters first and then integrate them into words by means of beam search, dynamic programming, etc. In contrast, Jaderberg et al. consider word recognition as a multi-class classification problem, and categorize each word over a large dictionary (with one class label per word) using a deep CNN.
With the success of RNNs on handwriting recognition, He et al. and Shi et al. solve word recognition as a sequence labelling problem. RNNs are employed to generate sequential labels of arbitrary length without character segmentation, and Connectionist Temporal Classification (CTC) is adopted to decode the sequence. Lee and Osindero and Shi et al. propose to recognize text using an attention-based sequence-to-sequence learning structure. In this manner, RNNs automatically learn the character-level language model present in the word strings of the training data. The soft-attention mechanism allows the model to selectively exploit local image features. These networks can be trained end-to-end with cropped word image patches as input. Moreover, Shi et al. insert a Spatial Transformer Network (STN) to handle words with irregular shapes.
2.3 Text Spotting Systems
Text spotting needs to handle both text detection and word recognition. Wang et al. take the locations and scores of detected characters as input and try to find an optimal configuration of a particular word in a given lexicon, based on a pictorial structures formulation. Neumann and Matas use a CC based method for character detection. These characters are then agglomerated into text lines based on heuristic rules. Optimal sequences are finally found in each text line using dynamic programming, yielding the recognized words. These recognition-based pipelines lack explicit word detection.
Some text spotting systems first generate text proposals with a high recall and a low precision, and then refine them during recognition with a separate model. The expectation is that a strong recognizer can reject false positives, especially when a lexicon is given. Jaderberg et al. use an ensemble model to generate text proposals, and then adopt a word classifier for recognition. Gupta et al. employ FCRN for text detection and the same word classifier for recognition. Most recently, Liao et al. combine “TextBoxes” and “CRNN”, yielding state-of-the-art performance on the text spotting task with the datasets in [14, 28].
Our goal is to design an end-to-end trainable network which simultaneously detects and recognizes all words in an image. Our model is motivated by recent progress in DNN models such as Faster R-CNN and sequence-to-sequence learning [24, 16], but we take the special characteristics of text into consideration. In this section, we present a detailed description of the whole system.
Notation All bold capital letters represent matrices and all bold lower-case letters denote column vectors. Vectors are concatenated vertically or stacked horizontally (column-wise) where noted. In the following, the bias terms in neural networks are omitted.
3.1 Overall Architecture
The whole system architecture is illustrated in Figure 1. Firstly, the input image is fed into a convolutional neural network modified from VGG-net. VGG-net consists of convolutional layers interleaved with max-pooling layers, followed by Fully-Connected (FC) layers. Here we remove the FC layers. Because text in images can be relatively small, we keep only a subset of the max-pooling layers, so that the resulting feature maps retain a higher spatial resolution.
Given the computed convolutional features, TPN provides a list of text region proposals (bounding boxes). Then, Region Feature Encoder (RFE) converts the convolutional features of proposals into fixed-length representations. These representations are further fed into Text Detection Network (TDN) to calculate their textness scores and bounding box offsets. Next, RFE is applied again to compute fixed-length representations of text bounding boxes provided by TDN (see purple paths in Figure 1). Finally, Text Recognition Network (TRN) recognizes words in the detected bounding boxes based on their representations.
3.2 Text Proposal Network
The Text Proposal Network (TPN) is inspired by RPN [21, 34], and can be regarded as a fully convolutional network. As presented in Figure 2, it takes convolutional features as input, and outputs a set of bounding boxes, each accompanied by a “textness” score and coordinate offsets that indicate scale-invariant translations and log-space height/width shifts relative to pre-defined anchors.
Considering that word bounding boxes usually have large aspect ratios and varying scales, we design anchors with multiple scales and multiple aspect ratios, most of them wider than tall.
Inspired by prior work, we apply two rectangular convolutional filters of different sizes on the feature maps to extract both local and contextual information. The rectangular filters lead to wider receptive fields, which are more suitable for word bounding boxes with large aspect ratios. The resulting features are concatenated and fed into two sibling layers for text/non-text classification and bounding box regression.
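As a concrete sketch of the anchor design described above: each anchor pairs a scale with an aspect ratio, with wide ratios suiting word-shaped boxes. The specific scales and ratios below are illustrative placeholders, since the paper's exact values do not appear in this copy.

```python
import itertools

def generate_anchors(scales, aspect_ratios):
    """Generate (w, h) anchor shapes centred at the origin.

    Each anchor has area scale**2 and width/height ratio `ratio`,
    i.e. w = scale * sqrt(ratio), h = scale / sqrt(ratio).
    """
    anchors = []
    for scale, ratio in itertools.product(scales, aspect_ratios):
        w = scale * ratio ** 0.5
        h = scale / ratio ** 0.5
        anchors.append((w, h))
    return anchors

# Illustrative values only -- not the paper's exact configuration.
# Ratios >= 1 produce anchors at least as wide as they are tall.
anchors = generate_anchors(scales=[32, 64, 128, 256],
                           aspect_ratios=[1, 2, 3, 5, 7, 10])
```

Every scale-ratio combination yields one anchor shape at each feature-map position, so the number of anchors per position is the product of the two list lengths.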
3.3 Region Feature Encoder
To process RoIs of different scales and aspect ratios in a unified way, most existing works re-sample regions into fixed-size feature maps via pooling. However, for text, this approach may lead to significant distortion due to the large variation of word lengths. For example, it may be unreasonable to encode short words like “Dr” and long words like “congratulations” into feature maps of the same size. In this work, we propose to re-sample regions according to their respective aspect ratios, and then use RNNs to encode the resulting feature maps of different lengths into fixed-length vectors. The whole region feature encoding process is illustrated in Figure 3.
For an RoI of size $h \times w$, we perform spatial max-pooling with a resulting size of
$$H \times \min\!\left(W_{\max},\ \left\lceil \frac{2Hw}{h} \right\rceil\right),$$
where the expected height $H$ is fixed and the width is adjusted to keep the aspect ratio close to $2w/h$ (twice the original aspect ratio), unless it exceeds the maximum length $W_{\max}$. Note that here we employ a pooling window whose height is twice its width, which benefits the recognition of narrow-shaped characters, like ‘i’, ‘l’, etc.
Next, the resampled feature maps are treated as a sequence and fed into RNNs for encoding. Here we use Long Short-Term Memory (LSTM) instead of a vanilla RNN to overcome the problem of vanishing or exploding gradients. The feature maps after the above varying-size RoI pooling are regarded as a sequence of columns. We flatten the features in each column and feed the resulting vectors into the LSTM one by one. At each time step, the LSTM units receive one column feature and update their hidden state via a non-linear function. In this recurrent fashion, the final hidden state captures the holistic information of the RoI and is used as its fixed-dimensional representation.
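The varying-size pooling rule above (fixed output height, width set to twice the region's aspect ratio and capped at a maximum) can be sketched as follows. The pooled height and maximum width are placeholder values, since the paper's exact hyper-parameters are not shown in this copy.

```python
import math

# Assumed pooling hyper-parameters -- placeholders, not the paper's values.
POOL_H = 8    # fixed output height
MAX_W = 35    # cap on the output width

def pooled_size(h, w, pool_h=POOL_H, max_w=MAX_W):
    """Output size of varying-size RoI pooling for an h x w region.

    The height is fixed to pool_h; the width keeps twice the region's
    aspect ratio, capped at max_w.
    """
    pooled_w = min(max_w, math.ceil(2.0 * pool_h * w / h))
    return pool_h, pooled_w

# A short word ("Dr") and a long word ("congratulations") now receive
# feature maps of different widths instead of identical ones.
```

With these placeholder values, a square region pools to a 2:1 map, while a very long region is capped at the maximum width.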
3.4 Text Detection and Recognition
Text Detection Network (TDN) aims to judge whether the proposed RoIs are text or not, and to refine the coordinates of the bounding boxes once again, based on the extracted region features. Two fully-connected layers are applied on these features, followed by two parallel layers for classification and bounding box regression respectively.
The classification and regression layers used in TDN are similar to those used in TPN. Note that the whole system refines the coordinates of text bounding boxes twice: once in TPN and then in TDN. Although RFE is employed twice to calculate features for proposals produced by TPN and later the detected bounding boxes provided by TDN, the convolutional features only need to be computed once.
Text Recognition Network (TRN) aims to predict the text in the detected bounding boxes based on the extracted region features. As shown in Figure 4, we adopt LSTMs with an attention mechanism [19, 24] to decode the sequential features into words.
Firstly, the hidden states at all steps from RFE are fed into an additional LSTM encoder layer. We record the hidden state at each time step, forming a sequence that contains local information at each time step and serves as the context for the attention model.
As for the decoder LSTMs, the ground-truth word label is adopted as input during training. It can be regarded as a sequence of tokens delimited by the special tokens START and END. We feed the decoder LSTMs with a sequence of vectors: the first is the concatenation of the encoder’s last hidden state and the attention output computed with zero guidance; each subsequent vector is made up of the embedding of the previous target token and the attention output guided by the hidden state of the decoder LSTMs at the previous time-step. The embedding function is defined as a linear layer followed by a non-linearity.
The attention function is defined as follows:
$$\boldsymbol{\alpha} = \operatorname{softmax}\!\left(\mathbf{v}^{\top}\tanh(\mathbf{W}\mathbf{H} + \mathbf{U}\mathbf{g})\right), \qquad \mathbf{c} = \mathbf{H}\boldsymbol{\alpha},$$
where $\mathbf{H}$ is the variable-length sequence of features to be attended, $\mathbf{g}$ is the guidance vector, $\mathbf{W}$ and $\mathbf{U}$ are linear embedding weights to be learned, $\boldsymbol{\alpha}$ is the vector of attention weights (one per feature), and $\mathbf{c}$ is a weighted sum of the input features.
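As a concrete sketch of such an attention function — a variable-length feature sequence scored against a guidance vector, softmax-normalized, and used to form a weighted sum — the following assumes one common additive-attention parameterization; the exact form used in the paper is not fully shown in this copy.

```python
import numpy as np

def attend(H, g, W_h, W_g, v):
    """Soft attention over the columns of H, guided by g.

    H : (d, n) sequence of n feature vectors to attend over
    g : (m,)   guidance vector
    W_h, W_g, v : learned projections (placeholders here)
    Returns (alpha, c): attention weights of length n and the
    weighted sum of the input features.
    """
    scores = v @ np.tanh(W_h @ H + (W_g @ g)[:, None])  # shape (n,)
    alpha = np.exp(scores - scores.max())               # stable softmax
    alpha /= alpha.sum()
    c = H @ alpha                                       # shape (d,)
    return alpha, c
```

The weights always sum to one, so the context vector `c` stays in the convex hull of the attended features.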
At each time-step $t$, the decoder LSTMs compute their hidden state $\mathbf{h}'_t$ and output vector $\mathbf{y}_t$ as follows:
$$\mathbf{h}'_t = \operatorname{LSTM}(\mathbf{x}_t, \mathbf{h}'_{t-1}), \qquad \mathbf{y}_t = \operatorname{softmax}(\mathbf{W}_o \mathbf{h}'_t),$$
where the LSTM is used for the recurrence formula, and $\mathbf{W}_o$ linearly transforms hidden states to the output space, which covers the case-insensitive characters, the digits, a single token representing all punctuation marks like “!” and “?”, and a special END token.
At test time, the token with the highest probability in the previous output is selected as the input token at the next step, instead of the ground-truth token. The process starts with the START token, and is repeated until the special END token is produced.
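The greedy test-time decoding loop above can be sketched as follows; `step_fn` is a stand-in for one step of the decoder LSTM, not the paper's actual network.

```python
def greedy_decode(step_fn, start_token="<START>", end_token="<END>", max_len=30):
    """Greedy decoding: feed the most probable previous token back in.

    step_fn(token, state) -> (probs_dict, new_state) performs one
    decoder step; decoding stops at the END token or max_len steps.
    """
    token, state, out = start_token, None, []
    for _ in range(max_len):
        probs, state = step_fn(token, state)
        token = max(probs, key=probs.get)  # highest-probability token
        if token == end_token:
            break
        out.append(token)
    return "".join(out)

# Toy step function that spells "cat" and then stops.
def toy_step(token, state):
    seq = ["c", "a", "t", "<END>"]
    i = 0 if state is None else state
    return {seq[i]: 1.0}, i + 1
```

During training, by contrast, the ground-truth tokens are fed in at each step (teacher forcing), as described earlier.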
3.5 Loss Functions and Training
Loss Functions As demonstrated above, our system takes as input an image, word bounding boxes and their labels during training. Both TPN and TDN employ the binary logistic loss for classification, and the smooth $L_1$ loss for regression. The loss for training TPN is
$$L_{\mathrm{TPN}} = \frac{1}{N}\sum_{i=1}^{N} L_{\mathrm{cls}}(p_i, p_i^{*}) + \frac{1}{N_{+}}\sum_{i=1}^{N_{+}} L_{\mathrm{reg}}(\mathbf{t}_i, \mathbf{t}_i^{*}),$$
where $N$ is the number of randomly sampled anchors in a mini-batch and $N_{+}$ is the number of positive anchors in this batch (the positive anchor indices range from $1$ to $N_{+}$). The mini-batch sampling and training process of TPN are similar to those used in Faster R-CNN. An anchor is considered positive if its Intersection-over-Union (IoU) ratio with a ground-truth box exceeds a threshold, and negative if its IoU with every ground-truth box falls below a lower threshold. $p_i$ denotes the predicted probability of anchor $i$ being text and $p_i^{*}$ is the corresponding ground-truth label ($1$ for text, $0$ for non-text). $\mathbf{t}_i$ is the predicted coordinate offsets for anchor $i$, and $\mathbf{t}_i^{*}$ is the associated ground-truth offsets. Bounding box regression is applied only to positive anchors, as there is no ground-truth bounding box matched with negative ones.
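The smooth L1 regression term can be sketched as below: quadratic for small residuals, linear for large ones, and summed only over positive anchors since negatives have no matched ground-truth box. The batch layout is a simplification for illustration.

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1 loss: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.abs(x)
    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)

def tpn_reg_loss(pred_offsets, gt_offsets, positive_mask):
    """Regression term of the TPN loss: smooth L1 over the 4 offsets,
    counted only for anchors flagged positive in positive_mask."""
    diff = smooth_l1(pred_offsets - gt_offsets)       # shape (N, 4)
    return float((diff.sum(axis=1) * positive_mask).sum())
```

The transition point at |x| = 1 makes the loss less sensitive to outliers than plain L2 while remaining smooth at zero.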
For the final outputs of the whole system, we apply a multi-task loss for both detection and recognition:
$$L = \frac{1}{\hat{N}}\sum_{i=1}^{\hat{N}} L_{\mathrm{cls}}(\hat{p}_i, \hat{p}_i^{*}) + \frac{1}{\hat{N}_{+}}\sum_{i=1}^{\hat{N}_{+}} L_{\mathrm{reg}}(\hat{\mathbf{t}}_i, \hat{\mathbf{t}}_i^{*}) + \frac{1}{\hat{N}_{+}}\sum_{i=1}^{\hat{N}_{+}} L_{\mathrm{rec}}(\mathbf{Y}^{(i)}, \mathbf{s}_i),$$
where $\hat{N}$ is the number of text proposals sampled from the output of TPN, and $\hat{N}_{+}$ is the number of positive ones. The IoU thresholds for positive and negative proposals are less strict than those used for training TPN. In order to mine hard negatives, we first apply TDN on randomly sampled negatives and select those with higher textness scores. $\hat{p}_i$ and $\hat{\mathbf{t}}_i$ are the outputs of TDN. $\mathbf{Y}^{(i)}$ is the sequence of ground-truth tokens for sample $i$ and $\mathbf{s}_i$ is the corresponding output sequence of the decoder LSTMs. $L_{\mathrm{rec}}$ denotes the cross-entropy loss over the predicted token sequence, where each entry is the predicted probability of the corresponding ground-truth token at that time-step; the initial START token is excluded from the loss.
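The recognition term is a cross-entropy over the decoder's token sequence. The sketch below assumes START is an input only and never scored, and that scoring stops once the END token is reached; the exact masking is not spelled out in this copy.

```python
import numpy as np

def sequence_ce(probs, targets, end_index):
    """Average cross-entropy over a predicted token sequence.

    probs   : (T, V) per-step probabilities over the vocabulary
    targets : length-T list of ground-truth token indices (no START)
    Scoring stops at (and includes) the END token.
    """
    loss, t = 0.0, 0
    for t, y in enumerate(targets):
        loss += -np.log(probs[t, y])
        if y == end_index:
            break
    return loss / (t + 1)  # average over the counted steps
```

Averaging over the counted steps keeps short and long words on a comparable loss scale.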
Following prior practice, we use an approximate joint training process to minimize the above two losses together (ADAM is adopted), ignoring the derivatives with respect to the proposed boxes’ coordinates.
Data Augmentation We sample one image per iteration in the training phase. Training images are resized so that the shorter side has a fixed length and the longer side does not exceed a maximum length. Data augmentation is also applied to improve the robustness of our model, and includes:
randomly rescaling the width of the image by a small ratio without changing its height, so that the bounding boxes have more variable aspect ratios;
randomly cropping a subimage which includes all text in the original image, padding with a margin on each side, and resizing the shorter side to a fixed length.
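The two augmentations above can be sketched on box coordinates alone; the rescale ratios and padding margin are placeholders, since the paper's exact values are not shown in this copy.

```python
import random

def rescale_width(img_w, boxes, ratios=(0.8, 1.0, 1.2)):
    """Randomly rescale image width (height unchanged) so that box
    aspect ratios vary; `ratios` are illustrative placeholders."""
    r = random.choice(ratios)
    new_boxes = [(x1 * r, y1, x2 * r, y2) for (x1, y1, x2, y2) in boxes]
    return int(round(img_w * r)), new_boxes

def text_tight_crop(boxes, pad=20):
    """Smallest crop (plus a padding margin) containing every text box."""
    x1 = min(b[0] for b in boxes) - pad
    y1 = min(b[1] for b in boxes) - pad
    x2 = max(b[2] for b in boxes) + pad
    y2 = max(b[3] for b in boxes) + pad
    return x1, y1, x2, y2
```

In practice the crop would also be clipped to the image bounds before resizing.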
Curriculum Learning In order to improve generalization and accelerate convergence, we design a curriculum learning paradigm to train the model on gradually more complex data.
We generate synthetic images containing words from the large “Generic” lexicon by using a synthetic engine. The words are randomly placed on simple, pure-colour backgrounds, with several words per image on average. We lock TRN initially, and train the remaining parts of the proposed model on these synthetic images during the first stage, with the convolutional layers initialized from a pre-trained VGG model and the other parameters randomly initialized from a Gaussian distribution. For efficiency, the first four convolutional layers are fixed during the entire training process. Separate learning rates are used for the remaining convolutional layers and for the randomly initialized parameters.
In the next stage, TRN is added and trained together with the other parts, with the learning rate for the randomly initialized parameters halved. We still use the synthetic images at this stage, as they contain a comprehensive word vocabulary from which TRN can learn a character-level language model.
In the next stage, the training examples are randomly selected from the “Synth800k” dataset, which consists of synthetic words blended into real scene backgrounds. The learning rate for the convolutional layers remains unchanged, while that for the other parts is halved again.
Finally, real-world training images from the ICDAR 2015, SVT and AddF2k datasets are employed. In this stage, all the convolutional layers are fixed and the learning rate for the other parts is halved once more. These real images contain far fewer words than the synthetic ones, but their appearance patterns are much more complex.
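The four curriculum stages above can be summarized as a small schedule lookup. The stage descriptions follow the text; the iteration boundaries are hypothetical placeholders, since the paper's counts are not reproduced in this copy.

```python
# Stage summaries follow the curriculum described in the text.
CURRICULUM = [
    {"data": "synthetic, plain background", "train_trn": False,
     "note": "initialise appearance model; conv layers from pre-trained VGG"},
    {"data": "synthetic, plain background", "train_trn": True,
     "note": "TRN learns a character-level language model"},
    {"data": "Synth800k (words on real scenes)", "train_trn": True,
     "note": "adapt to complex backgrounds; lower learning rate"},
    {"data": "real images (ICDAR2015, SVT, AddF2k)", "train_trn": True,
     "note": "conv layers frozen; lowest learning rate"},
]

def stage_for(iteration, boundaries):
    """Map a global iteration to its curriculum stage index.
    `boundaries` are cumulative iteration counts (placeholders)."""
    for i, b in enumerate(boundaries):
        if iteration < b:
            return i
    return len(boundaries)
```

A training loop would call `stage_for` each iteration and switch datasets and learning rates whenever the stage index changes.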
In this section, we perform experiments to verify the effectiveness of the proposed method. All experiments are implemented on an NVIDIA Tesla M40 GPU. We rescale the input image into multiple sizes during the test phase in order to cover the large range of bounding box scales, and sample the proposals with the highest textness scores produced by TPN. The detected bounding boxes are then merged via Non-Maximum Suppression (NMS) according to their textness scores and fed into TRN for recognition.
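A standard greedy NMS over axis-aligned boxes, as used for merging here, looks like the following; the IoU threshold is a placeholder, as the paper's value is not shown in this copy.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression on (x1, y1, x2, y2) boxes.
    Keeps the highest-scoring box, drops overlapping ones, repeats."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of the kept box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return keep
```

Boxes are processed in descending score order, so each suppressed box always overlaps a higher-scoring survivor.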
Criteria We follow the evaluation protocols used in the ICDAR Robust Reading Competition: a bounding box is considered correct if its IoU ratio with a ground-truth box exceeds a threshold and the recognized word also matches, ignoring case. Words that contain non-alphanumeric characters or are shorter than three characters are ignored. There are two evaluation protocols for scene text spotting: “End-to-End” and “Word Spotting”. The “End-to-End” protocol requires all words in the image to be recognized, regardless of whether each string exists in the provided contextualised lexicon, while “Word Spotting” only considers the words that actually exist in the lexicon provided, ignoring all the rest.
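The matching criterion above — IoU above a threshold plus a case-insensitive word match — can be sketched as follows; the threshold value is a placeholder, since the exact number is not shown in this copy.

```python
def iou(a, b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def is_correct(pred_box, pred_word, gts, iou_thresh=0.5):
    """A prediction counts as correct if it overlaps some ground truth
    above the IoU threshold AND the word matches, case-insensitively."""
    return any(iou(pred_box, g_box) > iou_thresh
               and pred_word.lower() == g_word.lower()
               for g_box, g_word in gts)
```

Precision, recall and F-measure are then computed from these per-prediction decisions.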
Datasets The commonly used datasets for scene text spotting include the ICDAR datasets and Street View Text (SVT). We use the dataset for the “Focused Scene Text” task of the ICDAR Robust Reading Competition, which is split into training and test images. In addition, it provides specific lists of words as lexicons for reference in the test phase, namely “Strong”, “Weak” and “Generic”. The “Strong” lexicon provides a short per-image list including all words that appear in the image. The “Weak” lexicon contains all words appearing in the entire dataset, and the “Generic” lexicon is a large generic word vocabulary. The other ICDAR dataset does not provide any lexicon, so we only use the generic vocabulary as context. The SVT dataset consists of training and test images harvested from Google Street View, which often have a low resolution. It also provides a “Strong” lexicon for each image. As there are unlabelled words in SVT, we only evaluate the “Word Spotting” performance on this dataset.
4.1 Evaluation under Different Model Settings
In order to show the effectiveness of our proposed varying-size RoI pooling (see Section 3.3) and the attention mechanism (see Section 3.4), we examine the performance of our model under different settings in this subsection. With a fixed RoI pooling size, we denote the models with and without the attention mechanism as “Ours Atten+Fixed” and “Ours NoAtten+Fixed” respectively. The model with both attention and varying-size RoI pooling is denoted as “Ours Atten+Vary”, in which the size of the feature maps after pooling is calculated by Equ. (1).
Although the last hidden state of the encoder LSTMs encodes the holistic information of an RoI image patch, it still lacks detail. Particularly for a long word image patch, the initial information may be lost during the recurrent encoding process. Thus, we keep the hidden states of the encoder LSTMs at each time step as context. The attention model can then choose the corresponding local features for each character during the decoding process, as illustrated in Figure 5. From Table 1, we can see that the model with the attention mechanism, namely “Ours Atten+Fixed”, achieves higher F-measures on all evaluated data than “Ours NoAtten+Fixed”, which does not use attention.
One contribution of this work is a new region feature encoder, which is composed of a varying-size RoI pooling mechanism and an LSTM sequence encoder. To validate its effectiveness, we compare the performance of models “Ours Atten+Vary” and “Ours Atten+Fixed”. Experiments show that varying-size RoI pooling performs significantly better for long words. For example, “Informatikforschung” can be recognized correctly by “Ours Atten+Vary”, but not by “Ours Atten+Fixed” (as shown in Figure 5), because a large portion of information for long words is lost by fixed-size RoI pooling. As illustrated in Table 1, adopting varying-size RoI pooling (“Ours Atten+Vary”) instead of fixed-size pooling (“Ours Atten+Fixed”) increases F-measures on the ICDAR datasets and SVT when the strong lexicon is used.
4.2 Joint Training vs. Separate Training
Previous works [11, 6, 17] on text spotting typically operate in a two-stage manner, where detection and recognition are trained and processed separately. The text bounding boxes detected by one model need to be cropped from the image and then recognized by another model. In contrast, our proposed model is trained jointly for both detection and recognition. By sharing convolutional features and the RoI encoder, the knowledge learned from the correlated detection and recognition tasks can be transferred between them, resulting in better performance on both.
To compare with the jointly trained model “Ours Atten+Vary”, we build a two-stage system (denoted as “Ours Two-stage”) in which detection and recognition models are trained separately. For a fair comparison, the detector in “Ours Two-stage” is built by removing the recognition part from model “Ours Atten+Vary” and trained only with the detection objective (denoted as “Ours DetOnly”). As to recognition, we employ CRNN, which produces state-of-the-art performance on text recognition. Model “Ours Two-stage” first adopts “Ours DetOnly” to detect text with the same multi-scale inputs. CRNN is then applied to recognize the detected bounding boxes. We can see from Table 1 that model “Ours Two-stage” performs worse than “Ours Atten+Vary” on all the evaluated datasets.
Furthermore, we also compare the detection-only performance of these two systems. Note that “Ours DetOnly” and the detection part of “Ours Atten+Vary” share the same architecture, but they are trained with different strategies: “Ours DetOnly” is optimized with only the detection loss, while “Ours Atten+Vary” is trained with a multi-task loss for both detection and recognition. Consistent with the “End-to-End” evaluation criterion, a detected bounding box is considered correct if its IoU ratio with a ground-truth box exceeds the threshold. The detection results are presented in Table 2. Without any lexicon, “Ours Atten+Vary” produces higher detection F-measures on the ICDAR datasets than “Ours DetOnly”, which illustrates that detector performance can be improved via joint training.
4.3 Comparison with Other Methods
In this part, we compare the text spotting results of “Ours Atten+Vary” with other state-of-the-art approaches. As shown in Table 1, “Ours Atten+Vary” outperforms all compared methods on most of the evaluated datasets. In particular, our method shows a significant superiority when using a generic lexicon. It achieves a higher average recall than the state-of-the-art TextBoxes while using fewer input scales.
Several text spotting examples are presented in Figure 6, which demonstrate that model “Ours Atten+Vary” is capable of dealing with words of different aspect ratios and orientations. In addition, our system is able to recognize words even when their bounding boxes do not cover the whole word, likely because it has learned a character-level language model from the synthetic data.
Using an M40 GPU, model “Ours Atten+Vary” processes an input image in a single forward pass; most of the time is spent computing the convolutional features, with the remainder split among text proposal calculation, RoI encoding, text detection and word recognition. In contrast, model “Ours Two-stage” spends considerably longer on word recognition for the same detected bounding boxes, as it needs to crop the word patches and re-compute convolutional features during recognition.
In this paper we presented a unified end-to-end trainable DNN for simultaneous text detection and recognition in natural scene images. A novel RoI encoding method was proposed, considering the large diversity of aspect ratios of word bounding boxes. With this framework, scene text can be detected and recognized in a single forward pass, efficiently and accurately. Experimental results illustrate that the proposed method produces impressive performance on standard benchmarks. One potential direction for future work is handling images with multi-oriented text.
6.1 Training Data with Different Levels of Complexity
In this paper, we design a curriculum learning paradigm to train the model on gradually more complex data. Here, we give a detailed introduction to the training data used.
Firstly, we generate synthetic images containing words from the large “Generic” lexicon by using a synthetic engine. The words are randomly placed on simple, pure-colour backgrounds, as shown in Figure 7. Note that these images cover a comprehensive word vocabulary, so a character-level language model can be learned by the Text Recognition Network (TRN).
Next, the “Synth800k” dataset is used to further tune the model. It contains images created by blending rendered words into real natural scenes, as presented in Figure 8. These images have more complex backgrounds, so the model is further fine-tuned to handle complicated appearance patterns.
Finally, we use real-world images to fine-tune our model. They are naturally captured images explicitly focused on the text of interest, as shown in Figure 9.
6.2 Varying-size Pooling vs. Fixed-size Pooling
In this work, we propose a new method for encoding an image region of variable size into a fixed-length representation. Unlike the conventional method [4, 21], where each Region-of-Interest (RoI) is pooled into a fixed-size feature map, we pool RoIs according to their aspect ratios. Here we present additional experimental results in Figure 10 to verify the effectiveness of the proposed encoding method. Compared to fixed-size pooling, our varying-size pooling divides image regions of long words into more horizontal parts, such that information for every character is preserved.
6.3 Additional Text Spotting Results
Here we present more test results by our proposed model, as shown in Figure 11.
-  Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proc. Int. Conf. Mach. Learn., 2009.
-  A. Bissacco, M. Cummins, Y. Netzer, and H. Neven. PhotoOCR: Reading text in uncontrolled conditions. In Proc. IEEE Int. Conf. Comp. Vis., 2013.
-  M. Busta, L. Neumann, and J. Matas. FASText: Efficient unconstrained scene text detector. In Proc. IEEE Int. Conf. Comp. Vis., 2015.
-  R. Girshick. Fast R-CNN. In Proc. IEEE Int. Conf. Comp. Vis., 2015.
-  A. Graves, S. Fernandez, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proc. Int. Conf. Mach. Learn., 2006.
-  A. Gupta, A. Vedaldi, and A. Zisserman. Synthetic data for text localisation in natural images. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2016.
-  P. He, W. Huang, Y. Qiao, C. C. Loy, and X. Tang. Reading scene text in deep convolutional sequences. In Proc. National Conf. Artificial Intell., 2016.
-  S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, 1997.
-  W. Huang, Z. Lin, J. Yang, and J. Wang. Text localization in natural images using stroke feature transform and text covariance descriptors. In Proc. IEEE Int. Conf. Comp. Vis., 2013.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Synthetic data and artificial neural networks for natural scene text recognition. In Proc. Adv. Neural Inf. Process. Syst., 2014.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Reading text in the wild with convolutional neural networks. Int. J. Comp. Vis., 116(1):1–20, 2015.
-  M. Jaderberg, A. Vedaldi, and A. Zisserman. Deep features for text spotting. In Proc. Eur. Conf. Comp. Vis., 2014.
-  J. Redmon, S. Divvala, R. B. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2016.
-  D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. R. Chandrasekhar, S. Lu, F. Shafait, S. Uchida, and E. Valveny. ICDAR 2015 robust reading competition. In Proc. Int. Conf. Doc. Anal. Recog., 2015.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
-  C.-Y. Lee and S. Osindero. Recursive recurrent nets with attention modeling for OCR in the wild. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2016.
-  M. Liao, B. Shi, X. Bai, X. Wang, and W. Liu. TextBoxes: A fast text detector with a single deep neural network. In Proc. National Conf. Artificial Intell., 2017.
-  W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In Proc. Eur. Conf. Comp. Vis., 2016.
-  M.-T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. In Proc. Conf. Empirical Methods in Natural Language Processing, 2015.
-  L. Neumann and J. Matas. Scene text localization and recognition with oriented stroke detection. In Proc. IEEE Int. Conf. Comp. Vis., 2013.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proc. Adv. Neural Inf. Process. Syst., 2015.
-  A. Shahab, F. Shafait, and A. Dengel. ICDAR 2011 robust reading competition challenge 2: Reading text in scene images. In Proc. Int. Conf. Doc. Anal. Recog., 2011.
-  B. Shi, X. Bai, and C. Yao. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. CoRR, abs/1507.05717, 2015.
-  B. Shi, X. Wang, P. Lv, C. Yao, and X. Bai. Robust scene text recognition with automatic rectification. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2016.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
-  S. Tian, Y. Pan, C. Huang, S. Lu, K. Yu, and C. L. Tan. Text flow: A unified text detection system in natural scene images. In Proc. IEEE Int. Conf. Comp. Vis., 2015.
-  Z. Tian, W. Huang, T. He, P. He, and Y. Qiao. Detecting text in natural image with connectionist text proposal network. In Proc. Eur. Conf. Comp. Vis., 2016.
-  K. Wang, B. Babenko, and S. Belongie. End-to-end scene text recognition. In Proc. IEEE Int. Conf. Comp. Vis., 2011.
-  T. Wang, D. Wu, A. Coates, and A. Y. Ng. End-to-end text recognition with convolutional neural networks. In Proc. IEEE Int. Conf. Patt. Recogn., pages 3304–3308, 2012.
-  Q. Ye and D. Doermann. Text detection and recognition in imagery: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 37(7):1480–1500, 2015.
-  X.-C. Yin, X. Yin, K. Huang, and H.-W. Hao. Robust text detection in natural scene images. IEEE Trans. Pattern Anal. Mach. Intell., 36(5):970–983, 2014.
-  Z. Zhang, W. Shen, C. Yao, and X. Bai. Symmetry-based text line detection in natural scenes. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2015.
-  Z. Zhang, C. Zhang, W. Shen, C. Yao, W. Liu, and X. Bai. Multi-oriented text detection with fully convolutional networks. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2016.
-  Z. Zhong, L. Jin, S. Zhang, and Z. Feng. DeepText: A unified framework for text proposal generation and text detection in natural images. CoRR, abs/1605.07314, 2016.
-  S. Zhu and R. Zanibbi. A text detection system for natural scenes with convolutional feature learning and cascaded classification. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2016.
-  Y. Zhu, C. Yao, and X. Bai. Scene text detection and recognition: recent advances and future trends. Frontiers of Computer Science, 10(1):19–36, 2016.