Scene text recognition is an important computer vision task that reads text from natural images. It is an indispensable component of image understanding, as it retrieves high-level semantic information. Many challenging applications, such as license plate reading, table analysis and document processing, benefit from the maturity of Optical Character Recognition (OCR). However, scene text recognition remains an unsolved problem because of drastic variations in appearance, illumination, noise, layout and background.
Recent advances in scene text recognition are mostly inspired by the success of deep learning techniques. Among them, Connectionist Temporal Classification (CTC) and Attentional sequence recognition (Attn) are the two most popular methods; incorporated with Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs) or other basic deep learning components, they form the so-called encoder-decoder framework. Both of them can handle text images of variable length. Attn based approaches are more accurate in most scenarios, while CTC based ones, such as [28, 19, 6], achieve better efficiency and are easier to train. However, some drawbacks still exist in these two mainstream methods. Firstly, CTC based approaches predict all characters at once, so the sequential and semantic dependencies between output characters are not modeled explicitly. The performance of CTC degrades when parts of the text images are polluted by illumination, noise or other causes. Even when combined with RNNs in the encoding stage, the decoder still suffers from the lack of semantic context between characters. Therefore, predefined lexicons are often needed to refine the output of CTC based approaches. Secondly, Attn based methods can embed a language model into the attentional recurrent decoder to depict character dependencies. However, in Natural Language Processing (NLP), constructing a reasonable language model requires massive data (often billions of samples). The two largest scene text datasets [10, 8] contain only millions of text images, and the images are synthesized. Not surprisingly, Attn based methods tend to overfit on the limited training data and remain unsatisfactory on real-world scene text recognition.
In this paper, to deal with the problems mentioned above, a Rectified Attentional Double Supervised network named ReADS is proposed, as shown in Figure 1. Our method first rectifies the input images by adopting a Spatial Transformer Network (STN). Then, in the encoder, the backbone is built from attentional CNNs shared by the CTC and Attn branches; a multi-layer bidirectional LSTM is additionally adopted for the Attn branch. Finally, in the decoder, both CTC and Attn are applied as double supervisions. The CTC branch mainly concentrates on inference from visual feature representations, while the Attn branch relies on semantic context modeling of characters. Thus our method gives the final recognition result from two different views in a decoupled style.
In summary, our main contributions are as follows:
We propose a novel double supervised network which predicts text from both image inherent texture and semantic context by CTC and Attn. The proposed method can overcome the shortcomings in previous single supervised approaches and achieve better accuracy.
A simple but effective attention mechanism is applied in the encoder, which discriminates foreground text features from messy backgrounds. A rectified module is also used in front of the encoder for handling irregular text images.
Our proposed method achieves the state-of-the-art performance on both regular and irregular scene text benchmarks. Especially, our method is only trained on synthetic data and no real world text images are used.
II Related Work
Early works in text recognition mainly focus on document text. Handcrafted features, such as connected components or Hough voting, are incorporated with the sliding window technique [36, 34] to detect and recognize every single character. Then a graph-based inference is utilized to find the words with the maximum probability over the characters. However, such traditional methods cannot handle intractable scenarios such as scene text because of noisy backgrounds, varied appearance and uncontrollable illumination. With the rise of deep learning, recent approaches tend to treat text recognition as a sequence-to-sequence problem, and most of them are based on an asymmetrical encoder-decoder framework. Typically, the encoder consists of CNNs and RNNs, while the decoder mainly relies on two implementations, namely CTC and Attn.
II-A CTC based Text Recognition
CTC is originally proposed by Graves et al. for speech recognition. Since both speech and text recognition can be regarded as sequence-to-sequence problems, CTC is prevalent in recent scene text recognition research. In , an end-to-end trainable network named CRNN is proposed to directly recognize text lines without character-level annotations. CRNN encodes text images using CNNs and bi-directional RNNs, then decodes with CTC. Xie et al. utilize CTC for online handwritten Chinese text recognition: input trajectories are first transformed into fixed-size images, which are then fed into a network whose structure is similar to CRNN. Liu et al. present STAR-Net, which goes one step further. To achieve better performance, STAR-Net employs a spatial transformer to rectify input text images, and a deeper CNN with residual structures is adopted to enhance the model's representation. In industrial applications, Facebook built an OCR system called Rosetta to process daily uploaded images. To balance effectiveness and efficiency, the bi-directional RNNs before CTC are removed and an elaborate training strategy is devised to fit the massive real-world data. It should be noted that the approaches mentioned above mostly focus on adjusting network architectures in the encoding phase, leaving the inherent problem of CTC untouched.
II-B Attn based Text Recognition
The methods based on Attn can be traced back to the work in . Inspired by research in image captioning, five variations of RNNs are proposed in that paper, and the model called R2AM outperforms the others; it can be considered the prototype of subsequent Attn based approaches. Several genres then emerge. In [29, 2, 22, 40], various rectification modules are applied to transform irregular text in images into regular text. Besides, a fractional pickup strategy is designed to improve attention sensitivity in . Other researchers concentrate on boosting the attention mechanisms themselves. Cheng et al. propose the Focusing Attention Network (FAN) to alleviate attention drift and acquire more accurate position predictions. In , to tackle arbitrarily-oriented text images, feature maps from four directions are extracted, merged, and then fed into Attn as attention maps. To address the irregular text problem, Li et al. employ a 2-dimensional attention map in the Attn based decoder. Recently, to make the most of attention mechanisms and reduce training time, transformer-like structures are introduced in  to replace the RNNs in Attn.
The ReADS is composed of three parts: the rectifier, the encoder, and the decoder. An overview of the whole network architecture is provided in Figure 1. In detail, the rectifier is a lightweight STN which adjusts the input images into more readable images of the same size. Next, the two-branch encoder takes the rectified images and outputs two types of representations. Finally, these two outputs are decoded by CTC and Attn separately in the decoding phase.
Since the Thin-Plate-Spline (TPS) proves more effective on perspective and curved text images than a simple affine transformation, we adopt an STN with a predicted TPS in this stage, which makes the rectifier learnable. The STN consists of three parts: the localization network, the grid generator and the sampler. First, a set of control points is predicted by the localization network. Then the grid generator derives a TPS transformation from the control points and outputs a sampling grid. Finally, the rectified images are sampled from the original images by the sampler.
III-A1 Localization Network
A TPS transformation is calculated from two sets of control points of equal size $N$, denoted by $C'$ and $C$ respectively. Here $C' = [c'_1, \dots, c'_N]$ is the list of source control points, where $c'_i$ is the $i$-th point, and $C = [c_1, \dots, c_N]$ is the list of target control points, defined similarly. The target points $C$ are placed evenly along the top and bottom borders of the output image at fixed positions. Thus we only need to obtain $C'$, which is acquired by the localization network. This network processes the input image via convolutional and pooling layers, then regresses $C'$ with a fully-connected layer whose output size is $2N$. It should be noted that the whole rectifier is differentiable and can be trained by the back-propagated gradients.
III-A2 Grid Generator
Given $C$ and $C'$, the grid generator computes a 2D TPS transformation and uses it to generate a sampling grid that maps every location in the rectified image to the input image. The function of this transformation is shown as follows,

$$p' = T(p) = a_0 + a_1 x + a_2 y + \sum_{i=1}^{N} w_i U(\lVert p - c_i \rVert), \qquad (1)$$

where $p = (x, y)$ is a location in the rectified image, $p'$ is the corresponding location in the input image, and $U$ is a radial basis function which can be calculated by the formulation:

$$U(r) = r^2 \log r^2 .$$

By solving a linear system with the boundary conditions

$$\sum_{i=1}^{N} w_i = 0, \qquad \sum_{i=1}^{N} w_i c_i = 0,$$

the coefficients $a_0$, $a_1$, $a_2$ and $w_1, \dots, w_N$ can be found; one set of coefficients is solved for the $x$ outputs and one for the $y$ outputs. Then Eq. (1) can be rewritten as

$$p' = W \, \big[\, 1, \; x, \; y, \; U(\lVert p - c_1 \rVert), \; \dots, \; U(\lVert p - c_N \rVert) \,\big]^{\top},$$

where $W$ is a $2 \times (N + 3)$ matrix collecting all the coefficients.
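As a rough sketch of the grid-generator math, the TPS coefficients can be fitted and applied with plain NumPy. This is a minimal illustration, not the paper's implementation: the function names are ours, and the boundary conditions are folded into the standard augmented linear system.

```python
import numpy as np

def tps_kernel(r2):
    # radial basis U(r) = r^2 log r^2, with U(0) defined as 0
    return np.where(r2 == 0, 0.0, r2 * np.log(np.maximum(r2, 1e-12)))

def solve_tps(src, dst):
    # fit coefficients mapping the N source control points onto the N targets
    n = src.shape[0]
    d2 = np.sum((src[:, None] - src[None]) ** 2, axis=-1)
    K = tps_kernel(d2)                           # (n, n) pairwise kernel values
    P = np.hstack([np.ones((n, 1)), src])        # (n, 3) affine terms
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T  # last rows: boundary conditions
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)                 # (n+3, 2) coefficients

def tps_map(coef, src, pts):
    # transform arbitrary points with the fitted coefficients
    d2 = np.sum((pts[:, None] - src[None]) ** 2, axis=-1)
    U = tps_kernel(d2)                           # (m, n)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:-3] + P @ coef[-3:]
```

Because an interpolating TPS passes exactly through its control points, `tps_map(coef, src, src)` reproduces the target points, which makes the fit easy to sanity-check.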
The sampler obtains every pixel value in the rectified image by interpolating in the input image. When a sampling location falls outside the input image, it is clipped to stay inside. A bilinear interpolation strategy is applied, which computes each pixel value in the rectified image from the four nearest sampled pixels. Like the localization network, the sampler is fully differentiable, which allows the rectifier to be optimized by gradient-based algorithms.
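The clipping-and-interpolation step can be sketched for a single-channel image as follows (a minimal NumPy illustration; the names are ours):

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    # sample a (H, W) image at float coordinates, clipping out-of-image points
    H, W = img.shape
    xs = np.clip(xs, 0.0, W - 1.0)
    ys = np.clip(ys, 0.0, H - 1.0)
    x0 = np.floor(xs).astype(int)
    y0 = np.floor(ys).astype(int)
    x1 = np.minimum(x0 + 1, W - 1)
    y1 = np.minimum(y0 + 1, H - 1)
    wx, wy = xs - x0, ys - y0
    # blend the four nearest pixels, first along x, then along y
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

Because the output is a smooth function of the sampling coordinates (almost everywhere), gradients can flow back through `xs` and `ys` to the TPS parameters, which is what makes the rectifier trainable end-to-end.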
Previous methods mainly employ classic CNN structures (e.g., VGG, ResNet and InceptionNet) as visual feature extractors in the encoder. However, there are various disturbances in scene text images, so we introduce attention mechanisms into the encoder design to suppress invalid backgrounds and highlight the useful foregrounds. Constrained by the receptive fields of convolutional layers, RNNs are often utilized after the visual feature extractor to enlarge context regions. However, local texture representations are impaired after RNNs. We therefore elaborate two branches to extract features at different receptive field scales. The first branch directly passes the visual feature map output by the CNNs to the decoder. The second branch processes the visual feature map with stacked Bi-LSTMs before feeding it into the decoder.
III-B1 Attentional Residual Block
In the encoding stage, we adopt attentional residual blocks to extract visual features. The structure of an attentional residual block is depicted in Figure 2. An attention mechanism called CBAM is implemented before merging the trunks and shortcuts. The CBAM learns channel-wise and spatial attention separately with negligible parameter overhead. Moreover, it can be plugged into any residual block.
The CBAM consists of two attention modules: channel attention and spatial attention. Given an intermediate feature map $F$, an average-pooling operation and a max-pooling operation are first computed simultaneously to describe the global distinctive features. Then the channel attention mask $M_c$ is obtained by forwarding both descriptors through a set of shared layers:

$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big).$$
Similarly, two single-channel feature maps are obtained by channel-wise max-pooling and average-pooling. Then they are concatenated and processed by a convolutional layer to acquire the spatial attention mask $M_s$:

$$M_s(F) = \sigma\big(f\big([\mathrm{AvgPool}(F); \mathrm{MaxPool}(F)]\big)\big),$$

where $f$ denotes the convolutional layer. Finally, $F$ is broadcast-multiplied by $M_c$ and $M_s$ to produce the attentional feature map.
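The two masks can be sketched in NumPy as below. This is only an illustration of the data flow: the weights are random placeholders, the shared network is reduced to a two-layer MLP, and CBAM's spatial convolution is replaced by a per-pixel projection for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam(feat, W1, W2, w_sp):
    # feat: (H, W, C); W1: (C, C//r), W2: (C//r, C); w_sp: (2, 1) projection
    # channel attention: shared MLP over the avg- and max-pooled descriptors
    mlp = lambda v: np.maximum(v @ W1, 0.0) @ W2
    m_c = sigmoid(mlp(feat.mean(axis=(0, 1))) + mlp(feat.max(axis=(0, 1))))
    feat = feat * m_c                          # broadcast (H, W, C) * (C,)
    # spatial attention: channel-wise avg/max maps, then a projection
    avg_s = feat.mean(axis=2, keepdims=True)   # (H, W, 1)
    max_s = feat.max(axis=2, keepdims=True)    # (H, W, 1)
    m_s = sigmoid(np.concatenate([avg_s, max_s], axis=2) @ w_sp)
    return feat * m_s                          # broadcast (H, W, C) * (H, W, 1)
```

Both masks lie in $(0, 1)$, so the module can only attenuate features, never amplify them; the residual shortcut around the block preserves the original signal.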
Long Short-Term Memory (LSTM) is capable of modeling long-range dependencies of input sequential features. The output of an LSTM at every timestamp depends not only on the current input but also on the previous inputs. Since the visual feature map carries no directional context and the capacity of a single-layer LSTM is limited, we stack two bi-directional LSTMs for context modeling. Given an input feature map of size $H \times W \times C$, it is reshaped to $W \times (H \cdot C)$ before being sent into the stacked Bi-LSTMs. The output then has size $W \times 2D$, where $D$ is the number of hidden units.
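The reshape into a sequence can be sketched as follows; the batch, height, width and channel sizes here are illustrative, not the paper's exact configuration.

```python
import numpy as np

# CNN feature map: (batch, height, width, channels) -- sizes are illustrative
feat = np.zeros((2, 4, 25, 128))
b, h, w, c = feat.shape
# treat the width axis as time: each column becomes one step of size h * c
seq = feat.transpose(0, 2, 1, 3).reshape(b, w, h * c)
print(seq.shape)  # (2, 25, 512)
```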
Scene text recognition mainly relies on two types of representation: inherent texture features in text images and semantic context dependencies between characters. To model both aspects and exploit their respective advantages, we adopt two techniques in the decoding phase, namely CTC and Attn. The CTC branch is responsible for recognition using inherent texture features, hence it takes the visual feature map from the attentional CNNs of the encoder. The Attn branch mainly focuses on semantic context features, so it utilizes the output of the stacked Bi-LSTMs in the encoder. These two losses ($L_{ctc}$ and $L_{attn}$) are added with weights for back-propagation in training. The total loss is calculated as follows,

$$L = \lambda L_{ctc} + (1 - \lambda) L_{attn},$$

where $\lambda$ is a hyperparameter set empirically in our experiments.
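A weighted combination of this kind can be written as below. The convex $\lambda$/$(1-\lambda)$ convention and the default value are our assumptions, since the paper only states that the weight is set empirically.

```python
def total_loss(loss_ctc, loss_attn, lam=0.5):
    # lam balances the two supervision branches; 0.5 is a placeholder value
    return lam * loss_ctc + (1.0 - lam) * loss_attn
```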
There are many advantages of CTC, such as parallel training and parameter-free decoding. For scene text recognition, CTC allows the network to select the most probable character sequence. The CTC output dimension is the number of classes plus one 'blank' symbol. Given an input sequence feature $x$ of length $T$, the probabilities of all possible alignments with all possible label sequences are output. Knowing that one label sequence $l$ can be represented by different alignments, the conditional probability of $l$ is a summation of the probabilities of all alignments $\pi$ mapped to it. Then the probability of a given label sequence $l$ conditioned on $x$ can be calculated as follows,

$$p(l \mid x) = \sum_{\pi \in \mathcal{B}^{-1}(l)} \prod_{t=1}^{T} p(\pi_t \mid x),$$

where $p(\pi_t \mid x)$ is the probability of emitting $\pi_t$ at time $t$, and $\mathcal{B}^{-1}(l)$ denotes the set of alignment sequences which are mapped to $l$ by the collapsing function $\mathcal{B}$. In inference, the most probable labeling for the input sequence is selected:

$$l^{*} = \arg\max_{l} \, p(l \mid x).$$
Since the solution space is exponential in $T$, beam search or greedy decoding is applicable in prediction. We choose the latter; detailed reasons are given in the inference section.
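Greedy CTC decoding reduces to taking the per-step argmax, collapsing repeated labels, and dropping blanks; a minimal sketch (class indexing and the blank index are illustrative):

```python
import numpy as np

def ctc_greedy_decode(probs, blank=0):
    # take the best class at each timestep, collapse repeats, then drop blanks
    path = probs.argmax(axis=-1)
    out, prev = [], None
    for p in path:
        if p != blank and p != prev:
            out.append(int(p))
        prev = p
    return out
```

Note that a blank between two identical labels keeps them distinct, so a doubled letter such as "ll" survives the collapsing step.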
The Attention-based sequence prediction (Attn) can also translate feature sequences into character sequences of arbitrary length, but via a different mechanism. Attn not only takes the visual features into account but also models output dependencies. Such a model is appealing due to its simplicity, its power in sequence modeling, and its ability to capture output dependencies in a recurrent way.
Attn makes use of the encoder output at every decoding step through the attention mechanism. It proceeds iteratively for $T$ steps to generate a symbol sequence $y = (y_1, \dots, y_T)$ until an End Of Sequence (EOS) symbol is produced. At step $t$, based on the output $h = (h_1, \dots, h_L)$ of the Bi-LSTM encoder branch, the symbol $y_t$ is predicted using the following formulations,

$$e_{t,i} = v^{\top} \tanh(W s_{t-1} + V h_i), \qquad \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{j=1}^{L} \exp(e_{t,j})}, \qquad g_t = \sum_{i=1}^{L} \alpha_{t,i} h_i,$$

$$s_t = \mathrm{LSTM}\big(s_{t-1}, (y_{t-1}, g_t)\big), \qquad y_t = \mathrm{softmax}(W_o s_t + b_o),$$

where $W$, $V$, $v$, $W_o$ and $b_o$ are all learnable parameters and $s_t$ is the hidden state of the LSTM decoder at step $t$. The score $e_{t,i}$ is an alignment model which measures how well the inputs around position $i$ and the output at position $t$ match.
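One alignment-and-context step of this additive scoring can be sketched as follows; the weight shapes and names are illustrative, not the paper's exact parameterization.

```python
import numpy as np

def attention_step(H, s, W, V, v):
    # H: encoder outputs (T, d_h); s: previous decoder state (d_s,)
    e = np.tanh(H @ W + s @ V) @ v   # (T,) alignment scores
    a = np.exp(e - e.max())
    a /= a.sum()                     # softmax -> attention weights
    return a @ H, a                  # context vector (d_h,) and weights
```

The context vector is then concatenated with the previous symbol embedding and fed to the decoder LSTM; subtracting `e.max()` before exponentiating is the usual numerically stable softmax.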
During inference, two predictions are output by the two branches separately. We choose the character sequence from Attn as the final result, because the accuracy of the Attn branch is consistently better than that of the CTC branch under all our experimental conditions. We leave merging the predictions of the two branches as a future research topic. As for the decoding method, unlike most previous studies, we simply use greedy decoding in both branches because it is faster than beam search and the performance differences are negligible in our experiments.
In this section, we conduct extensive experiments on various scene text recognition benchmarks to verify the effectiveness of our model. First, the datasets we experiment on are introduced in Section IV-A. Then the implementation details for training and inference are described in Section IV-B. Finally, we analyze and discuss the results of comparison experiments with other state-of-the-art methods, together with several ablation studies, in Section IV-C.
The ReADS is only trained on two public synthetic datasets, MJSynth  and SynthText , without finetuning on other datasets. Then the model is tested on 7 datasets, which are ICDAR 2003 (IC03)  , ICDAR 2013 (IC13) , ICDAR 2015 (IC15) , IIIT5K-Words (IIIT5K) , Street View Text (SVT) , Street View Text Perspective (SVTP) , and CUTE80 (CUTE) .
MJSynth  is a synthetic text dataset. The dataset consists of about 9 million images covering 90k English words and includes the training, validation and testing splits separately. Random transformations and other effects are applied to every word image. All the images in MJSynth are taken for our model training.
SynthText  is also a synthetic text dataset. But unlike MJSynth, it is intended for scene text detection. Words are rendered on whole images and not cropped. We extract all the word regions by the given word bounding boxes for training.
ICDAR 2003 (IC03)  contains 1156 images for training and 1110 images for evaluation. Following Wang et al. , words which are either too short (less than 3 characters) or contain non-alphanumeric characters are ignored. Then the number of images for evaluation reduces to 867 after filtering.
ICDAR 2013 (IC13)  inherits most images from IC03 and adds some new ones. The trainset and testset have 848 and 1095 images respectively. After removing words with non-alphanumeric characters, the filtered testset contains 1015 images for our evaluation.
ICDAR 2015 (IC15)  consists of 4468 images for training and 2077 images for evaluation. All the images were acquired with a pair of Google Glasses without deliberate positioning and focusing, so this dataset contains much noisy, blurry and irregular text. There are two versions of the testset with different image counts: 1811 and 2077. The former filters out images with non-alphanumeric characters, extreme transformations and curved text, while the latter keeps all the images. We test our ReADS on both versions.
IIIT5K-Words (IIIT5K)  contains a trainset of 2000 images and a testset of 3000 images gathered from the Internet. Each image is associated with a 50-word lexicon and a 1,000-word lexicon.
Street View Text (SVT)  consists of 249 images collected from Google Street View. The testset contains 647 cropped samples which are collected from these images. Many of the images are very noisy or have very low resolutions.
Street View Text Perspective (SVTP)  is a collection of 645 images from Google Street View like SVT. But most of the images are more difficult to recognize because of large perspective distortions.
CUTE80 (CUTE)  is a dataset with 288 cropped images, most of which contain curved text. It is collected from natural scenes.
IV-B Implementation Details
The network configurations are summarized in Table I. A 34-layer residual network with CBAM is adopted as the visual feature extractor. Except for the first residual block, each residual block is followed by an asymmetric max pooling to keep more horizontal resolution. Two convolutions and a CBAM constitute every residual unit. Following the visual feature extractor, the network splits into two branches. In the CTC branch, the visual feature map is directly sent into the decoder for recognition. In the Attn branch, a stacked RNN of two-layer Bi-LSTMs precedes the attentional LSTM based decoder. Both the Attn and CTC branches recognize 62 classes, including digits and uppercase and lowercase letters. When evaluating the trained model on benchmarks, we normalize the predictions to be case-insensitive and discard punctuation marks. Moreover, no lexicons are applied after prediction.
The proposed model is implemented in TensorFlow and trained from scratch. ADADELTA is applied as the optimizer with a batch size of 64. Training runs for about 4.5 epochs, and the images of every batch are randomly selected from MJSynth and SynthText. The initial learning rate is set to 1.0, then adjusted to 0.1 and 0.01 at the end of the 3rd and 4th epochs. All experiments are conducted on a workstation with a 2.20 GHz Intel(R) Xeon(R) CPU, 256 GB RAM and two NVIDIA Tesla V100 GPUs. Some image processes, such as random rotation, elastic deformation, and hue, brightness and contrast changes, are applied for data augmentation in training. Images whose height is more than three times their width are simply rotated anticlockwise. It should be noted that, unlike some previous works, we only use greedy search, and no input augmentation or result merging is adopted for inference.
IV-C1 Comparison with State-of-the-art
In this section, we compare the ReADS with several state-of-the-art methods on the benchmarks mentioned in Section IV-A, as shown in Table II. Since our model is trained on synthetic data and uses neither model ensembles nor result merging, we only list results acquired under the same conditions for a fair comparison. It can be observed that, on both regular and irregular datasets, the ReADS achieves either highly competitive or state-of-the-art performance. In summary, our method obtains five first-place, one second-place and one competitive result on a total of seven benchmarks. Especially on IC03 and IC13, we outperform the previous methods by a relatively large margin. Our results on the two regular text datasets IIIT5K and SVT are slightly worse than those on IC03 and IC13. Through our observation and analysis, a possible explanation is that the data sources of these two are relatively limited; hence, the semantic context in the Attn branch is more important and the CTC branch does not take full effect. Results of the ReADS on the three irregular text datasets fluctuate to a certain degree. According to our analysis, there are two main reasons. The first is that no mechanisms other than the rectifier are introduced to tackle irregular text. However, the rectifier works in a weakly supervised way (supervised only by the recognition loss), so its effect is not ideal in some difficult scenarios. The second reason is that the data distributions and scales of these datasets are rather different. IC15 has the largest data scale and includes horizontal, inclined and curved text images. SVTP is a medium-scale dataset and consists of only horizontal and inclined text images. The images in CUTE are the fewest, and most of them contain curved text.
IV-C2 Ablation Study
In this section, we conduct two sets of experiments for ablation studies. The first is to analyze the impact of some modules in the network. The second is to verify the effectiveness of double supervised branches.
Influence of Modules: In this part, we perform several experiments to explore the impact of some modules on model performance. Two key modules are taken into consideration: the rectifier and the attention mechanisms. All the implementation details are the same as those of the last section, and the results are shown in Table III. We observe that the rectifier is effective for irregular text while slightly harmful for regular text. Another conclusion is that the attention mechanisms achieve consistent improvements on all benchmarks except CUTE. We believe this is due to the small scale of CUTE: with only 288 images, a single additionally misrecognized image causes a visible drop in accuracy. Fortunately, by introducing both modules, we take the advantages of both and obtain better performance than with the other settings.
Influence of Branches: Meanwhile, we design experiments to verify the effectiveness of double-branch supervision. We compare the ReADS with two implementations which each disable one of the two supervised branches while keeping the other components unchanged. As shown in Table IV, although the CTC branch network performs worse than the Attn branch network, it still provides effective supervision in the ReADS. The double-branch supervised network outperforms either single-branch supervised version on most benchmarks. We observe that, on the irregular text datasets, the Attn branch network achieves results comparable with the ReADS. According to our analysis, this is due to the lack of a more effective rectifying module, so the performance of the double supervised branches is saturated for irregular text images. Some qualitative cases are also illustrated in Figure 3 to verify the effectiveness of the ReADS.
In this paper, we present a novel approach named ReADS, a rectified attentional double supervised scene text recognizer. It is equipped with decoders based on Attentional sequence recognition (Attn) and Connectionist Temporal Classification (CTC) for semantic context modeling and inherent texture representation. Combined with two effective modules, the rectifier and the attention mechanisms, the ReADS shows state-of-the-art or highly competitive performance on seven benchmarks compared with previous works. Moreover, the model can be trained end-to-end from scratch, and no additional labels are needed except line-level text. In the future, we plan to explore stronger attention mechanisms, especially for irregular text. Merging the predictions of the CTC and Attn branches is another interesting topic. Besides, since scene text recognition is closely related to NLP, we will combine these two fields for better results.
-  (2019) What is wrong with scene text recognition model comparisons? dataset and model analysis. In The IEEE International Conference on Computer Vision (ICCV), Cited by: TABLE II.
-  (2019) ASTER: an attentional scene text recognizer with flexible rectification. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (9), pp. 2035–2048. Cited by: §II-B, TABLE II.
-  (2018) Rosetta: large scale system for text detection and recognition in images. In International Conference on Knowledge Discovery & Data Mining (SIGKDD), Cited by: §II-A.
-  (2017) Focusing attention: towards accurate text recognition in natural images. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §II-B.
-  (2018) AON: towards arbitrarily-oriented text recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II-B, TABLE II.
-  (2017) Reading scene text with attention convolutional sequence modeling. Neurocomputing. Cited by: §I, TABLE II.
-  (2006) Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In The 23rd International Conference on Machine Learning (ICML), Cited by: §II-A.
-  (2016) Synthetic data for text localisation in natural images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §I, §IV-A, §IV-A.
-  (1997) Long short-term memory. Neural Computation 9 (8), pp. 1735–1780. Cited by: §I.
-  (2014) Synthetic data and artificial neural networks for natural scene text recognition. CoRR abs/1406.2227. Cited by: §I, TABLE II, §IV-A, §IV-A.
-  (2015) Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017–2025. Cited by: §I.
-  (2016) Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §III-B.
-  (2015-08) ICDAR 2015 competition on robust reading. In 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pp. 1156–1160. Cited by: §IV-A, §IV-A.
-  (2013) ICDAR 2013 robust reading competition. In 12th International Conference on Document Analysis and Recognition (ICDAR), pp. 1484–1493. Cited by: §IV-A, §IV-A.
-  (2016) Recursive recurrent nets with attention modeling for OCR in the wild. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II-B.
-  (2018) Show, attend and read: A simple and strong baseline for irregular text recognition. CoRR abs/1811.00751. Cited by: §II-B.
-  (2019) Scene text recognition from two-dimensional perspective. In AAAI, Vol. 33, pp. 8714–8721. Cited by: TABLE II.
-  (2019) SAFE: scale aware feature encoder for scene text recognition. In ACCV, Cited by: TABLE II.
-  (2016) STAR-net: a spatial attention residue network for scene text recognition. In The British Machine Vision Conference (BMVC), Cited by: §I, §II-A, TABLE II.
-  (2018) Char-net: a character-aware neural network for distorted scene text recognition. In AAAI, Cited by: TABLE II.
-  (2005) ICDAR 2003 Robust Reading Competitions: Entries, Results and Future Directions. International Journal of Document Analysis and Recognition 7, pp. 105–122. Cited by: §IV-A, §IV-A.
-  (2019) A multi-object rectified attention network for scene text recognition. CoRR abs/1901.03003. Cited by: §II-B, TABLE II.
-  (2012) Scene Text Recognition using Higher Order Language Priors. In British Machine Vision Conference (BMVC), Cited by: §IV-A, §IV-A.
-  (2012) Real-time scene text localization and recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II.
-  (2020) FACLSTM: convlstm with focused attention for scene text recognition. Science China Information Sciences 63 (2), pp. 120103. Cited by: TABLE II.
-  (2013) Recognizing text with perspective distortion in natural scenes. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §IV-A, §IV-A.
-  (2014) A robust arbitrary text detection system for natural scene images. Expert Systems with Applications 41 (18), pp. 8027 – 8048. Cited by: §IV-A, §IV-A.
-  (2017) An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (11), pp. 2298–2304. Cited by: §I, §II-A, TABLE II.
-  (2016) Robust scene text recognition with automatic rectification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II-B, TABLE II.
-  (2015) Very deep convolutional networks for large-scale image recognition. pp. 1–14. Cited by: §III-B.
-  (2015) Going deeper with convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §III-B.
-  (2017) Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008. Cited by: §II-B.
-  (2019) 2D-ctc for scene text recognition. CoRR. Cited by: TABLE II.
-  (2011) End-to-end scene text recognition. In International Conference on Computer Vision (ICCV), Cited by: §II.
-  (2011) End-to-end scene text recognition. In 2011 International Conference on Computer Vision (ICCV), pp. 1457–1464. Cited by: §IV-A, §IV-A, §IV-A.
-  (2010) Word spotting in the wild. In ECCV, Cited by: §II.
-  (2019) A simple and robust convolutional-attention network for irregular text recognition. CoRR abs/1904.01375. Cited by: §II-B.
-  (1989) Thin-plate splines and the decompositions of deformations. IEEE Transactions on Pattern Analysis and Machine Intelligence 11 (6). Cited by: §III-A.
-  (2018) CBAM: convolutional block attention module. In The European Conference on Computer Vision (ECCV), Cited by: §III-B1.
-  (2019) Symmetry-constrained rectification network for scene text recognition. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §II-B.
-  (2014) Strokelets: a learned multi-scale representation for scene text recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §II.
-  (2019) Text recognition using local correlation. Cited by: TABLE II.
-  (2016) Fully convolutional recurrent network for handwritten chinese text recognition. In The 23rd International Conference on Pattern Recognition (ICPR), Cited by: §II-A.
-  (2012) Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701. Cited by: §IV-B.
-  (2019) Esir: end-to-end scene text recognition via iterative image rectification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: TABLE II.