Scene text recognition has drawn remarkable attention in computer vision due to its importance in various real-world applications, such as scene understanding, card information entry, and street sign reading. Benefiting from recent advances in deep learning, reading text in natural images has evolved rapidly during the past few years. Despite significant progress, scene text recognition in unconstrained conditions remains a challenging problem due to complex conditions such as blurring, distortion, orientation, and uneven lighting.
Irregular text frequently appears in natural scenes, owing to curved character placement, perspective distortion, etc., as shown in Figure 1. Recognizing text with arbitrary shapes is extremely difficult because of unpredictable, changeable text layouts. Most existing approaches focus on regular text recognition and are difficult to generalize to distorted text. Recently, some attempts have been made towards irregular text recognition. Yang et al. utilized a 2D attention mechanism to focus on each character and introduced an auxiliary dense character detection task to encourage the learning of text-specific visual representations. However, this method relies on an expensive multi-task learning strategy, and inaccurate attention regions cause recognition errors. Cheng et al. proposed the arbitrary orientation network (AON) to extract scene text features in four directions and adopted a weighting mechanism to combine the four feature sequences. In order to extract features with the same dimension in all four directions, AON has to resize the text image into a square shape. However, scene text generally has various aspect ratios, so scaling to a square severely distorts the aspect ratio of the text line, especially for long text. Shi et al. applied a spatial transformation prior to recognition to transform the input image and rectify the text in it. The Spatial Transformer Network (STN) framework with Thin-Plate Spline (TPS) transformation is utilized to perform text rectification. Although it has shown impressive results on irregular benchmarks, we observe that the rectified images may still contain distortions or lose some character information, especially for severely distorted text, which leads to mistaken recognition results.
In this paper, we design a novel Recurrent Calibration Network (RCN) to progressively calibrate irregular text and boost recognition performance. The recurrent structure iteratively refines the geometric transformation of irregular text under the same parametric capacity. In each iteration, the residual between the previous and current geometric transformation fields is estimated from the previously calibrated image to get one step closer to the optimal transformation. In this way, the difficulty of each step is intrinsically relieved and severe distortions can be eliminated progressively. Such a design effectively improves the robustness of the model to large variations of text. Besides, we observe that a spatial transformation applied to the output image of the previous step cannot restore missing character information, and the resulting incomplete appearances cause recognition errors as well. Therefore, we elaborate a fiducial-point refinement structure to keep the integrity of text during the recurrent process. Instead of the calibrated images, the coordinates of fiducial points are tracked and transmitted across iterations. At each step, the localization network predicts the coordinate offsets with respect to the previous positions, which implicitly reflect the residual spatial transformation. Furthermore, we map the coordinates of the fiducial points back to the original input image and sample from the original. In this way, when the coordinates fall outside the calibrated image, mapping back to the original compensates for the missing information. Our method can effectively calibrate irregular text while preserving the original character information across multiple calibrations. The calibration network is jointly optimized with the recognition network under the same objective in an end-to-end scheme. Therefore, our RCN can automatically learn the optimal transformation for the following recognition task.
The main contributions are summarized as follows:
(1) We propose a Recurrent Calibration Network (RCN) to progressively calibrate the irregular text to boost the recognition performance.
(2) We design a fiducial-point refinement structure to keep the integrity of text during the recurrent process, which avoids the accumulation of missing information in the scenario of iterative calibrations.
(3) Our RCN achieves superior performance compared with the state-of-the-art methods on the challenging datasets, especially on irregular benchmarks.
2 Related Work
Scene text recognition has been widely researched and numerous methods have been proposed in recent years. Traditional methods recognized scene text at the character level: they first performed detection to generate multiple candidate character locations, then applied a character classifier for recognition. Wang et al. detected each character with a sliding window and recognized it with a character classifier trained on HOG descriptors. Bissacco et al. designed a fully connected network to extract character feature representations, then used a language model to recognize characters. However, the performance of these methods is limited by inaccurate character detectors. To be free from this problem, some methods directly learn the mapping between entire word images and target strings. For example, Jaderberg et al. assigned a class label to each word in a pre-defined lexicon and performed a 90k-class classification with a CNN. Rodriguez-Serrano et al. formulated scene text recognition as a retrieval problem, which embedded word labels and word images into a common Euclidean space and found the closest word label in this space.
With the successful application of recurrent neural networks (RNNs) to sequence recognition, some researchers [27, 10, 28, 19] developed sequence-based methods that combine a convolutional neural network (CNN) and an RNN to encode the feature representations of word images. Shi et al. and He et al. both used the Connectionist Temporal Classification (CTC) loss to calculate the conditional probabilities between the outputs of the RNN and the target sequences. After that, Shi et al. and Li et al. introduced an attention mechanism to adaptively weight the features and select the most relevant feature representations in an RNN-based decoder. In order to eliminate the attention drift problem, Cheng et al. employed a focusing attention mechanism to automatically adjust the attention weights. Bai et al. proposed the edit probability to estimate the probability of generating a string while considering possible occurrences of missing or superfluous characters. Although these approaches have shown promising results, they cannot effectively handle irregular text. The main reason is that word images are encoded into 1D feature sequences, but irregular text is not horizontally arranged.
Research on irregular text recognition is relatively sparse. Yang et al. adopted a 2D soft attention mechanism to focus on individual characters at each step and introduced an auxiliary character detection task to learn text-specific features. Although this method recognizes text in a 2D space, it uses a multi-task learning framework and needs character-level annotations. Liu et al. proposed to remove the distortion of scene text by detecting and rectifying individual characters. However, detection errors affect the performance of the subsequent rectification and recognition. Cheng et al. proposed that the visual representation of irregular text can be described as the combination of features in four directions. This approach effectively captures the deep features of irregular text, but the strategy of scaling word images to a square severely distorts the aspect ratio of text lines, especially for long text. Shi et al. introduced a spatial transformer network with thin-plate spline (TPS) transformation to calibrate irregular text into regular text, then recognized the calibrated text with an attention-based framework. Although it considerably improves irregular text recognition, it is still difficult to precisely locate the fiducial points that tightly bound the text region, especially for severely distorted text. This leads to errors in the parameter estimation of the TPS transformation, and hence to deformation of the scene text.
Different from existing methods, we design a novel Recurrent Calibration Network (RCN) to calibrate irregular text in a progressive manner. The calibration process is decomposed into multiple steps and the calibration results are iteratively refined. Different steps work together to eliminate text distortion for better recognition, so the difficulty of each step is greatly relieved and large distortions can also be effectively removed.
3 The Proposed Approach
The overview of our Recurrent Calibration Network (RCN) for irregular text recognition is shown in Figure 2. The irregular text is progressively calibrated into a regular one, which serves as the input of the subsequent recognition network. During the calibration process, distortions are eliminated step by step while the recurrent framework maintains the same parametric capacity. Besides, the fiducial-point refinement structure transmits and refines the coordinates of the fiducial points during the recurrent process. At each step, the network first predicts the coordinate offsets, then obtains the updated coordinates and projects them into the original input image. After that, we estimate the TPS parameters and sample from the original image, which effectively keeps the integrity of the text. RCN is able to deal with distorted text of various orientations and shapes, including severely distorted text.
3.1 Recurrent Calibration
The spatial transformer is a learnable module which explicitly allows spatial manipulation of data within the network. It consists of a localization network, a grid generator, and a sampler, and can be mathematically written as

$$I' = \mathrm{Sampler}\big(\mathrm{Grid}(\theta),\ I\big), \qquad \theta = f_{\mathrm{loc}}(I), \qquad (1)$$

where $I$ is the input image and $I'$ is the transformed output. The function $f_{\mathrm{loc}}$ is parameterized as a learnable localization network, which predicts the transformation parameters $\theta$ from the input image. It is common to calibrate scene text with a spatial transformer network.
However, single-step calibration often fails to fully remove geometric distortions and can lose text content, which adversely affects the following recognition. Therefore, we design a recurrent structure that decomposes the calibration process into multiple progressive steps, which mitigates the difficulty of each step. Different steps work together to calibrate irregular text for better recognition. In each iteration, the calibrated result is further refined by feeding it into the pipeline again, forming a recurrent structure. Therefore, we can refine the calibration results iteratively under the same parametric capacity. Moreover, we note that the grid generator and the sampler in the STN can be combined into a single transform function. Denoting the transformation parameter estimation as $\mathcal{P}$ and the spatial transformation operation as $\mathcal{S}$, we have the following structure:

$$T_t = \mathcal{P}(I_{t-1}), \qquad (2)$$

$$I_t = \mathcal{S}(I_{t-1};\ T_t), \qquad (3)$$

where $t = 1, 2, \ldots$ represents the $t$-th iteration, and $I_0$ is the original input image. In this way, large variations can be progressively handled to match the succeeding recognition task.
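The recurrent structure above is simply a loop that re-estimates the transformation from its own output. A minimal sketch, where `localize` and `transform` are hypothetical stand-ins for the parameter-estimation network and the combined grid-generator/sampler:

```python
def recurrent_calibrate(image, localize, transform, num_iters=3):
    """Sketch of the recurrent calibration loop: each step re-estimates the
    transformation parameters from the previous output and re-warps it.
    `localize` and `transform` are placeholders, not the actual networks."""
    calibrated = image
    for _ in range(num_iters):
        params = localize(calibrated)               # parameters from I_{t-1}
        calibrated = transform(calibrated, params)  # produce I_t
    return calibrated
```

Because the same `localize` and `transform` are reused at every step, the iteration count does not change the parametric capacity.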
For each transformation, we predict a set of fiducial points and calculate the TPS transformation parameters from them. Specifically, the base fiducial points $C' = [c'_1, \ldots, c'_K] \in \mathbb{R}^{2 \times K}$ on the target image are defined to distribute evenly along the top and bottom image borders, and are always constant. In the feed-forward process, the localization network regresses the coordinates of the fiducial points $C = [c_1, \ldots, c_K] \in \mathbb{R}^{2 \times K}$ on the input image. With $K$ fiducial points on both $C$ and $C'$, the parameters of the TPS transformation are represented by a matrix $T \in \mathbb{R}^{2 \times (K+3)}$:

$$T = \left( \Delta_{C'}^{-1} \begin{bmatrix} C^\top \\ 0^{3 \times 2} \end{bmatrix} \right)^\top, \qquad (4)$$
where $\Delta_{C'} \in \mathbb{R}^{(K+3) \times (K+3)}$ is a matrix calculated from $C'$:

$$\Delta_{C'} = \begin{bmatrix} 1^{K \times 1} & C'^\top & R \\ 0 & 0 & 1^{1 \times K} \\ 0 & 0 & C' \end{bmatrix}, \qquad (5)$$
where $R \in \mathbb{R}^{K \times K}$ is a matrix comprising $R_{i,j} = d_{i,j}^2 \ln d_{i,j}^2$ and $d_{i,j}$ is the Euclidean distance between $c'_i$ and $c'_j$. Given a point $p' = [x', y']^\top$ on $I'$, TPS finds its corresponding point $p$ on $I$ by a linear projection:

$$p = T\,\hat{p}', \qquad \hat{p}' = [1,\ x',\ y',\ r_1,\ \ldots,\ r_K]^\top, \qquad (6)$$

where $r_k = d_k^2 \ln d_k^2$ and $d_k$ is the Euclidean distance between $p'$ and the base fiducial point $c'_k$.
In the transform module, a sampling grid on $I$ is obtained by applying Eq. 6 to all points on $I'$, and the calibrated image is generated by bilinear interpolation based on this grid.
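The TPS solve and point mapping just described can be sketched in NumPy as follows; the helper names are ours, and the real model of course performs this in a batched, differentiable fashion:

```python
import numpy as np

def tps_params(C_src, C_dst):
    """Solve for the TPS matrix T mapping base points C_dst (target image)
    to predicted fiducial points C_src (input image). Both are (K, 2)."""
    K = len(C_dst)
    d2 = np.sum((C_dst[:, None] - C_dst[None, :]) ** 2, axis=-1)
    R = d2 * np.log(np.where(d2 > 0, d2, 1.0))  # r = d^2 ln d^2, 0 on diagonal
    delta = np.zeros((K + 3, K + 3))
    delta[:K, 0] = 1.0
    delta[:K, 1:3] = C_dst
    delta[:K, 3:] = R
    delta[K, 3:] = 1.0            # side condition: kernel weights sum to zero
    delta[K + 1:, 3:] = C_dst.T   # ... and carry no affine component
    rhs = np.zeros((K + 3, 2))
    rhs[:K] = C_src
    return np.linalg.solve(delta, rhs).T  # T has shape (2, K+3)

def tps_map(T, p, C_dst):
    """Project a single target-image point p back into the input image."""
    d2 = np.sum((C_dst - p) ** 2, axis=-1)
    r = d2 * np.log(np.where(d2 > 0, d2, 1.0))
    return T @ np.concatenate(([1.0], p, r))
```

A quick sanity check on the solve: when the predicted points are a pure translation of the base points, the recovered mapping is that same translation.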
3.2 Fiducial-point Refinement
As observed in Figure 3, the iterative refinements gradually calibrate the irregular text so that it is more amenable to recognition; however, the missing information caused by inaccurate transformations cannot be restored in the direct iteration structure. This leads to incomplete character appearances and thus to recognition errors. The reason is that the output image is sampled only from the previous calibrated image, and pixel information outside that image is discarded. In the top row of Figure 3, this effect is visible whenever pixel information outside the calibrated images is required. In the scenario of iterative calibrations, the effects of missing information accumulate over multiple transformations.
To remedy this issue, we design a fiducial-point refinement structure to keep the integrity of the text in the recurrent process. It is worth noting that the transformation parameters are determined solely by the fiducial points. Therefore, we advocate transmitting the transformation information through the fiducial points, rather than discarding it after each transformation. During multiple iterations, the coordinates of the fiducial points are tracked and refined. Based on the refined fiducial points, we estimate the transformation parameters and sample from the original image at each step. Therefore, the original character information is preserved until the final transformation. In order to ease the training of the network, the localization network predicts only the coordinate offsets; it is easier to optimize the residual coordinates than the absolute coordinates. The offsets and the previous coordinates are then composed to describe the current positions of the fiducial points. However, since they are predicted from the calibrated images, the generated offsets and coordinates both lie on the calibrated images. In order to sample from the original image, we need to map the coordinates back to the original image. At the first step, the input is the original image, so the mapping is an identity transformation and can be omitted. It should also be noted that the fiducial points on the original image are mapped back to the top and bottom image borders on the calibrated image after each transformation, so the previous fiducial points are always in the canonical form (see green points in Figure 2). In the $t$-th recursion, let the offsets of the fiducial points be denoted as $\Delta C_t$, and the coordinates of the fiducial points on the original image and the calibrated image as $C_t$ and $C_t^c$, respectively. The current coordinates of the fiducial points on the calibrated image are generated as the sum of the offsets and the previous (canonical) coordinates $C'$:

$$C_t^c = C' + \Delta C_t, \qquad \Delta C_t = f_{\mathrm{loc}}(I_{t-1}), \qquad (7)$$
where $I_{t-1}$ is the output calibrated image of the previous step, and $f_{\mathrm{loc}}$ is the localization network. Then the current coordinates are mapped back to the original image, which serve as the updated coordinates. The mapping operation is the same as the definition in Eq. 6, and the formulation is

$$C_t = T_{t-1}\,\hat{C}_t^c, \qquad (8)$$

where $\hat{C}_t^c$ lifts each column of $C_t^c$ as in Eq. 6, and $T_{t-1}$ is the transformation of the previous step.
Based on the coordinates of the updated fiducial points $C_t$, the transformation parameters $T_t$ can be estimated as in Eq. 4, and the next calibrated image $I_t$ can be sampled from the original image $I_0$:

$$I_t = \mathcal{S}(I_0;\ T_t), \qquad (9)$$

where $\mathcal{S}$ denotes the spatial transformation (sampling) operation.
In this way, the integrity of the text is effectively kept, because pixel information outside the calibrated image is also preserved until the final transformation. Therefore, our network can progressively calibrate irregular text without any loss of character information. Besides, all modules of the calibration process are differentiable, allowing backpropagation within an end-to-end learning framework. Moreover, the calibration process focuses on the text region, which implicitly models an attention mechanism. Localizing the text region accurately not only achieves satisfactory calibration but also effectively removes background noise.
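The control flow of the refinement can be sketched as follows; `localize`, `solve_tps`, `map_points`, and `warp` are hypothetical stand-ins for the localization network, the parameter estimation of Eq. 4, the point mapping of Eq. 6, and the sampler:

```python
def refine_fiducials(image0, base_pts, localize, solve_tps, map_points, warp,
                     iters=3):
    """Fiducial-point refinement: coordinates, not warped images, flow through
    the recursion, and every warp resamples the ORIGINAL image image0."""
    calibrated = image0
    T_prev = None
    for _ in range(iters):
        offsets = localize(calibrated)    # offsets w.r.t. canonical points
        pts_cal = base_pts + offsets      # coordinates in the calibrated frame
        # map back to the original image (identity at the first step)
        pts_orig = pts_cal if T_prev is None else map_points(T_prev, pts_cal)
        T_prev = solve_tps(pts_orig)      # parameters from ORIGINAL coordinates
        calibrated = warp(image0, T_prev) # always sample from image0
    return calibrated
```

The key difference from the direct recurrence is the last two lines: the transformation is always applied to `image0`, so pixels discarded by an earlier, inaccurate warp remain recoverable.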
3.3 Recognition Network
For the recognition network, we use the attention-based encoder-decoder pipeline as in . First, the encoder extracts a sequence of feature vectors $H = (h_1, \ldots, h_L)$ through a CNN-LSTM structure. Then the attention decoder recurrently generates the character sequence $(y_1, \ldots, y_T)$. At step $t$, the decoder dynamically weights the image features and selects the most relevant contents to generate the probability distribution. Given the last RNN hidden state $s_{t-1}$ and the feature sequence $H$, the attention weights are obtained by scoring each element of the feature sequence separately:

$$e_{t,i} = w^\top \tanh(W s_{t-1} + V h_i + b), \qquad (10)$$

$$\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{j=1}^{L} \exp(e_{t,j})}. \qquad (11)$$
Then we obtain the weighted sum of the sequential feature vectors, which focuses on the most relevant features:

$$g_t = \sum_{i=1}^{L} \alpha_{t,i}\, h_i. \qquad (12)$$
After that, the RNN hidden state is updated and the probability distribution is estimated as follows:

$$s_t = \mathrm{RNN}\big(s_{t-1},\ (g_t,\ y_{t-1})\big), \qquad (13)$$

$$p(y_t) = \mathrm{softmax}(W_o s_t + b_o), \qquad (14)$$
where $w$, $W$, $V$, $b$, $W_o$, and $b_o$ are the learnable parameters. Following , we exploit a bidirectional decoder, which consists of two decoders in opposite directions.
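One scoring/glimpse step of this decoder can be sketched in NumPy as follows; the shapes and parameter names are our assumptions, and in the actual model the score parameters are learned:

```python
import numpy as np

def attention_step(s_prev, H, W, V, w, b):
    """One attention step: score every feature vector h_i against the
    previous hidden state, softmax-normalize, and return the glimpse."""
    e = np.tanh(s_prev @ W.T + H @ V.T + b) @ w  # scores, shape (L,)
    e = e - e.max()                              # softmax numerical stability
    alpha = np.exp(e) / np.exp(e).sum()          # attention weights
    g = alpha @ H                                # weighted sum of features
    return g, alpha
```

With all score parameters set to zero, the scores are constant, so the weights degenerate to uniform and the glimpse to the mean feature, which is a handy sanity check.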
3.4 Model Training
Given the input image $I$ and the corresponding ground truth $y$, the objective function considers both the left-to-right decoder and the right-to-left decoder:

$$\mathcal{L}(\theta) = -\frac{1}{2}\Big(\log p_{l2r}(y \mid I;\ \theta) + \log p_{r2l}(y \mid I;\ \theta)\Big), \qquad (15)$$

in which $\theta$ denotes the parameters of both the calibration network and the recognition network, and $p_{l2r}$ and $p_{r2l}$ are the output probability distributions of the decoders in left-to-right and right-to-left order, respectively. Through the recurrent structure, we can perform multiple calibrations under the same parametric capacity. Furthermore, the calibration network and the recognition network are optimized together under the same recognition loss, so the calibration network is encouraged to transform the irregular text to best match the succeeding recognition network.
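A minimal sketch of this bidirectional objective, assuming the per-step class distributions have already been computed (variable names are ours; the right-to-left decoder is scored against the reversed target):

```python
import numpy as np

def bidirectional_loss(p_l2r, p_r2l, target):
    """Average the negative log-likelihoods of the two decoders.
    p_l2r, p_r2l: (T, num_classes) per-step distributions; target: class ids."""
    nll_l2r = -sum(np.log(p_l2r[t, c]) for t, c in enumerate(target))
    nll_r2l = -sum(np.log(p_r2l[t, c]) for t, c in enumerate(target[::-1]))
    return 0.5 * (nll_l2r + nll_r2l)
```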
Table 1: Recognition accuracies (%) on irregular text benchmarks. "50" and "Full" denote lexicon sizes; "None" means lexicon-free.

| Method | SVT-P (50) | SVT-P (Full) | SVT-P (None) | CUTE80 | IC15 | Total-Text (multi-oriented) | Total-Text (curved) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mishra et al. | 45.7 | 24.7 | - | - | - | - | - |
| Phan et al. | 75.6 | 67.0 | - | - | - | - | - |
| Shi et al. | 92.6 | 72.6 | 66.8 | 54.9 | - | - | - |
| Yang et al. | 93.0 | 80.2 | 75.8 | 69.3 | - | - | - |
| Shi et al. | 91.2 | 77.4 | 71.8 | 59.2 | - | - | - |
| Liu et al. | - | - | 73.5 | - | - | - | - |
| Cheng et al. | 92.6 | 81.6 | 71.5 | 63.9 | 66.2 | - | - |
| Cheng et al. | 94.0 | 83.7 | 73.0 | 76.8 | 68.2 | - | - |
| Shi et al. | - | - | 78.5 | 79.5 | 76.1 | - | - |
4 Experiments
In this section, we describe the experimental settings and evaluate the effectiveness of our method. We compare the performance of our RCN with other approaches on both regular and irregular datasets.
4.1 Datasets
Street View Text Perspective (SVT-P)  contains 639 cropped word images captured from side-view angles in Google Street View. Most of them suffer from severe perspective distortion. Each image is specified with a 50-word lexicon and a full lexicon.
CUTE80  contains 288 cropped word images for testing, specially collected for evaluating the performance of curved text recognition. No lexicon is provided.
ICDAR 2015  contains 2077 word images, including plenty of irregular text, taken by Google Glass. For fair comparison, we discard images that contain non-alphanumeric characters. No lexicon is specified.
IIIT5K  contains 3000 cropped word images collected from the Internet. Each image has a 50-word lexicon and a 1000-word lexicon.
ICDAR 2003  contains 860 cropped word images for testing. Following the evaluation protocol in , we recognize only images containing alphanumeric characters with at least three characters. Each image is specified with a 50-word lexicon defined by , and a full lexicon consists of all the words that appear in the test set.
Total-Text  has annotated word images with three different text orientations: horizontal, multi-oriented, and curved. We select the multi-oriented and curved text collections, which contain 480 and 971 images, respectively.
We use synthetic data for training, including Synth90k released by Jaderberg et al.  and SynthText released by Gupta et al. . Our model is evaluated on all real-world test datasets without any fine-tuning.
Table 2: Recognition accuracies (%) on regular benchmarks. "50", "1k", and "Full" denote lexicon sizes; "None" means lexicon-free.

| Method | SVT (50) | SVT (None) | IIIT5k (50) | IIIT5k (1k) | IIIT5k (None) | IC03 (50) | IC03 (Full) | IC03 (None) | IC13 (None) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Wang et al. | 57.0 | - | - | - | - | 76.0 | 62.0 | - | - |
| Mishra et al. | 73.2 | - | 64.1 | 57.5 | - | 81.8 | 67.8 | - | - |
| Wang et al. | 70.0 | - | - | - | - | 90.0 | 84.0 | - | - |
| Bissacco et al. | 90.4 | 78.0 | - | - | - | - | - | - | 87.6 |
| Almazán et al. | 89.2 | - | 91.2 | 82.1 | - | - | - | - | - |
| Yao et al. | 75.9 | - | 80.2 | 69.3 | - | 88.5 | 80.3 | - | - |
| Rodriguez-Serrano et al. | 70.0 | - | 76.1 | 57.4 | - | - | - | - | - |
| Jaderberg et al. | 86.1 | - | - | - | - | 96.2 | 91.5 | - | - |
| Jaderberg et al. | 95.4 | 80.7 | 97.1 | 92.7 | - | 98.7 | 98.6 | 93.1 | 90.8 |
| Jaderberg et al. | 93.2 | 71.7 | 95.5 | 89.6 | - | 97.8 | 97.0 | 89.6 | 81.8 |
| Shi et al. | 97.5 | 82.7 | 97.8 | 95.0 | 81.2 | 98.7 | 98.0 | 91.9 | 89.6 |
| Lee et al. | 96.3 | 80.7 | 96.8 | 94.4 | 78.4 | 97.9 | 97.0 | 88.7 | 90.0 |
| He et al. | 92.0 | - | 94.0 | 91.6 | - | 97.0 | 94.4 | - | - |
| Wang and Hu | 96.3 | 81.5 | 98.0 | 95.6 | 80.8 | 98.8 | 97.8 | 91.2 | - |
| Cheng et al. | 97.1 | 85.9 | 99.3 | 97.5 | 87.4 | 99.2 | 97.3 | 94.2 | 93.3 |
| Bai et al. | 96.6 | 87.5 | 99.5 | 97.9 | 88.3 | 98.7 | 97.9 | 94.6 | 94.4 |
| Liu et al. | 96.1 | - | 96.9 | 94.3 | 86.6 | 98.4 | 97.9 | 93.1 | 92.7 |
| Yang et al. | 95.2 | - | 97.8 | 96.1 | - | 97.7 | - | - | - |
| Shi et al. | 95.5 | 81.9 | 96.2 | 93.8 | 81.9 | 98.3 | 96.2 | 90.1 | 88.6 |
| Liu et al. | - | 84.4 | - | - | 83.6 | - | - | 91.5 | 90.8 |
| Cheng et al. | 96.0 | 82.8 | 99.6 | 98.1 | 87.0 | 98.5 | 97.1 | 91.5 | - |
| Shi et al. | 97.4 | 89.5 | 99.6 | 98.8 | 93.4 | 98.8 | 98.0 | 94.5 | 91.8 |
4.2 Implementation Details
To validate the effectiveness of our method, we use the same network architecture and experimental settings as  to ensure a fair comparison; we only replace the spatial transform process with our proposed recurrent calibration framework. When the number of iterations is one, the RCN degenerates to the base model . Note that the calibration network shares its weights across iterations, so our RCN has the same parametric capacity as the base model. Moreover, we also explore the effect of the number of iterations. The input images are resized to 64×256, and 32×64 downsampled images serve as the input of the localization network. After spatial transformation, the calibrated images have a size of 64×256 in the intermediate steps and 32×100 in the last step. We do not use any data augmentation. The model is trained with the ADADELTA  optimization method.
During testing, we report results for both lexicon-free and lexicon-based recognition. In the lexicon-free setting, we directly select the most probable character at each decoding step; the bidirectional decoder generates two results, and we choose the one with the higher probability. In the lexicon-based setting, we pick the lexicon word nearest to the generated string under the edit distance metric.
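The lexicon-based selection is a nearest-neighbor search under Levenshtein distance; a compact sketch (function names are ours):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def lexicon_match(pred, lexicon):
    """Pick the lexicon word nearest to the predicted string."""
    return min(lexicon, key=lambda word: edit_distance(pred, word))
```

`min` returns the first word with the smallest distance, so ties resolve in lexicon order.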
4.3 Depth of Recurrence
The recurrent structure progressively calibrates the irregular text and thus generates better calibrated images. We investigate the effect of the number of iterations and report the results in Table 3. As the number of iterations increases, the recognition results gradually improve. In particular, the improvements on curved text benchmarks are remarkable, which suggests the significance of iterative calibration for recognizing severely distorted text. Moreover, our network degenerates to  when the number of iterations is one. We do not reproduce the results reported in , and our baseline falls behind them; however, we still achieve better performance than , which demonstrates the effectiveness of our designs. More than three iterations yields negligible further gains, so the number of iterations is set to three in the following experiments.
In addition, comparing RCN-3 and RCN-3 (w/o FP-R) shows that the fiducial-point refinement structure leads to a significant performance improvement under the same number of iterations. The main reason is that the original character information is preserved, so missing information can be recovered, and the succeeding recognition network benefits from the integrity of the text.
Furthermore, some examples are presented in Figure 4. As observed, the text becomes more regular as the number of iterations increases. Besides, character information lost in a previous step can be recovered in subsequent steps. Therefore, the integrity of the text is effectively preserved during the iterative calibrations. Our network not only transforms the text in a direction that is more beneficial to recognition, but also gradually removes background noise.
4.4 Performance on Irregular Benchmarks
| Method | SVT | IIIT5k | IC03 | IC13 | SVT-P | CUTE80 | IC15 | Total-Text (multi-oriented) | Total-Text (curved) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
Recognizing irregular text is very challenging due to the diverse character placements. To validate the effectiveness of our method, we evaluate RCN on several irregular benchmarks and summarize the results in Table 1. The network with a single calibration is taken as the baseline. As observed, our method achieves significant improvements over the baseline and consistently outperforms the other approaches by a large margin. It is worth noting that, as a comparison between the baseline and  shows, we do not reproduce the results reported in . Nevertheless, we still achieve better performance on all benchmarks, which suggests the significance of our method. In particular, we outperform  by a margin of 9 percentage points on CUTE80. Besides, we find that the gains on curved text benchmarks are more significant than on perspective text benchmarks. The distortions of curved text are more serious and harder to model, so existing methods perform worse on curved text. By contrast, our approach can effectively calibrate severely distorted text and hence obtains a large improvement on curved text benchmarks. It should also be pointed out that RCN not only achieves much better calibration but also introduces no extra parameters. As shown in Figure 4, our RCN is capable of calibrating irregular text with various degrees of deformation, including nearly vertical text. Compared with , we do not destroy the aspect ratio of the text, so the characters are not deformed. We also report recognition performance on Total-Text, which has not been recorded in previous literature. Our method achieves promising results on both the multi-oriented and curved text collections. Furthermore, with the calibrated text from the last iteration, any kind of recognition network can be employed; using the focusing attention mechanism in  or the edit probability in  could further improve performance.
4.5 Performance on Regular Benchmarks
We also conduct experiments on regular benchmarks. Most samples in these datasets are regular text, but irregular text also exists. We report our results in Table 2. Shi et al.  corrected the results on SVT in their released code, and we record the updated results. The baseline is the network with a single calibration as in . Compared with the baseline, RCN significantly improves recognition performance. RCN performs best on IIIT5k in the lexicon-free setting. IIIT5k contains much curved text, which demonstrates the advantage of RCN in dealing with severely distorted text. In the lexicon-based scenario, we achieve the best results on SVT and IIIT5k, but fall slightly behind [4, 13] on IC03 and  on IC13. However,  applied a specially designed edit probability to train their networks, while we only use the traditional frame-wise loss. Besides,  benefited from a pre-defined 90k lexicon and only recognized words in its dictionary. It should also be remarked that  and  used extra character bounding box annotations; by contrast, our method requires only textual labels, which saves substantial annotation effort.
5 Conclusion
In this paper, we propose a Recurrent Calibration Network (RCN) for irregular text recognition. We divide the calibration process into multiple progressive steps to relieve the calibration difficulty of each step. Besides, the recurrent structure gives our network the same parametric capacity as a network with a single spatial transformation. Moreover, we design a fiducial-point refinement structure to track and transmit the coordinates of the fiducial points, instead of propagating the calibrated images. Therefore, our network is able to effectively keep the integrity of the text during iterative calibrations; information lost due to an inaccurate transformation can be recovered in subsequent steps. Furthermore, the calibration network and the recognition network are jointly trained under the same objective for text recognition, so the text is gradually calibrated in a direction that is more beneficial to recognition. Extensive experiments conducted on challenging benchmarks verify the effectiveness of our method, especially on irregular datasets.
References
-  J. Almazán, A. Gordo, A. Fornés, and E. Valveny. Word spotting and recognition with embedded attributes. IEEE transactions on pattern analysis and machine intelligence, 36(12):2552–2566, 2014.
-  F. Bai, Z. Cheng, Y. Niu, S. Pu, and S. Zhou. Edit probability for scene text recognition. arXiv preprint arXiv:1805.03384, 2018.
-  A. Bissacco, M. Cummins, Y. Netzer, and H. Neven. Photoocr: Reading text in uncontrolled conditions. In 2013 IEEE International Conference on Computer Vision, pages 785–792. IEEE, 2013.
-  Z. Cheng, F. Bai, Y. Xu, G. Zheng, S. Pu, and S. Zhou. Focusing attention: Towards accurate text recognition in natural images. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 5086–5094. IEEE, 2017.
-  Z. Cheng, X. Liu, F. Bai, Y. Niu, S. Pu, and S. Zhou. Arbitrarily-oriented text recognition. arXiv preprint arXiv:1711.04226, 2017.
-  C. K. Ch’ng and C. S. Chan. Total-text: A comprehensive dataset for scene text detection and recognition. In Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, volume 1, pages 935–942. IEEE, 2017.
-  A. Gordo. Supervised mid-level features for word image representation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2956–2964, 2015.
-  A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369–376. ACM, 2006.
-  A. Gupta, A. Vedaldi, and A. Zisserman. Synthetic data for text localisation in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2315–2324, 2016.
-  P. He, W. Huang, Y. Qiao, C. C. Loy, and X. Tang. Reading scene text in deep convolutional sequences. In AAAI, volume 16, pages 3501–3508, 2016.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Deep structured output learning for unconstrained text recognition. arXiv preprint arXiv:1412.5903, 2014.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Synthetic data and artificial neural networks for natural scene text recognition. arXiv preprint arXiv:1406.2227, 2014.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Reading text in the wild with convolutional neural networks. International Journal of Computer Vision, 116(1):1–20, 2016.
-  M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems, pages 2017–2025, 2015.
-  M. Jaderberg, A. Vedaldi, and A. Zisserman. Deep features for text spotting. In European conference on computer vision, pages 512–528. Springer, 2014.
-  D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. R. Chandrasekhar, S. Lu, et al. Icdar 2015 competition on robust reading. In Document Analysis and Recognition (ICDAR), 2015 13th International Conference on, pages 1156–1160. IEEE, 2015.
-  D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. G. i Bigorda, S. R. Mestre, J. Mas, D. F. Mota, J. A. Almazan, and L. P. De Las Heras. Icdar 2013 robust reading competition. In Document Analysis and Recognition (ICDAR), 2013 12th International Conference on, pages 1484–1493. IEEE, 2013.
-  N. Ketkar. Introduction to pytorch. In Deep Learning with Python, pages 195–208. Springer, 2017.
-  C.-Y. Lee and S. Osindero. Recursive recurrent nets with attention modeling for OCR in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2231–2239, 2016.
-  W. Liu, C. Chen, and K.-Y. K. Wong. Char-net: A character-aware neural network for distorted scene text recognition. In AAAI, 2018.
-  Z. Liu, Y. Li, F. Ren, W. L. Goh, and H. Yu. Squeezedtext: A real-time scene text recognition by binary convolutional encoder-decoder network. In AAAI, 2018.
-  S. M. Lucas, A. Panaretos, L. Sosa, A. Tang, S. Wong, R. Young, K. Ashida, H. Nagai, M. Okamoto, H. Yamamoto, et al. Icdar 2003 robust reading competitions: entries, results, and future directions. International Journal of Document Analysis and Recognition (IJDAR), 7(2-3):105–122, 2005.
-  A. Mishra, K. Alahari, and C. Jawahar. Top-down and bottom-up cues for scene text recognition. In CVPR-IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012.
-  T. Quy Phan, P. Shivakumara, S. Tian, and C. Lim Tan. Recognizing text with perspective distortion in natural scenes. In Proceedings of the IEEE International Conference on Computer Vision, pages 569–576, 2013.
-  A. Risnumawan, P. Shivakumara, C. S. Chan, and C. L. Tan. A robust arbitrary text detection system for natural scene images. Expert Systems with Applications, 41(18):8027–8048, 2014.
-  J. A. Rodriguez-Serrano, A. Gordo, and F. Perronnin. Label embedding: A frugal baseline for text recognition. International Journal of Computer Vision, 113(3):193–207, 2015.
-  B. Shi, X. Bai, and C. Yao. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE transactions on pattern analysis and machine intelligence, 39(11):2298–2304, 2017.
-  B. Shi, X. Wang, P. Lyu, C. Yao, and X. Bai. Robust scene text recognition with automatic rectification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4168–4176, 2016.
-  B. Shi, M. Yang, X. Wang, P. Lyu, C. Yao, and X. Bai. Aster: an attentional scene text recognizer with flexible rectification. IEEE transactions on pattern analysis and machine intelligence, 2018.
-  J. Wang and X. Hu. Gated recurrent convolution neural network for ocr. In Advances in Neural Information Processing Systems, pages 335–344, 2017.
-  K. Wang, B. Babenko, and S. Belongie. End-to-end scene text recognition. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1457–1464. IEEE, 2011.
-  T. Wang, D. J. Wu, A. Coates, and A. Y. Ng. End-to-end text recognition with convolutional neural networks. In Pattern Recognition (ICPR), 2012 21st International Conference on, pages 3304–3308. IEEE, 2012.
-  X. Yang, D. He, Z. Zhou, D. Kifer, and C. L. Giles. Learning to read irregular text with attention mechanisms. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 3280–3286, 2017.
-  C. Yao, X. Bai, B. Shi, and W. Liu. Strokelets: A learned multi-scale representation for scene text recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4042–4049, 2014.
-  M. D. Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.