1 Introduction
Scene text recognition has received considerable attention from the computer vision community, since text is an essential way to convey information and knowledge. Despite the many efforts made in past decades, scene text recognition remains an unsolved task due to the challenges posed by poor image quality (e.g., low resolution, blur, and uneven illumination) and varied text appearance (e.g., size, fonts, colors, directions, perspective view, and complex background), as shown in Fig. 1.
Inspired by speech recognition and machine translation, most recent state-of-the-art approaches regard scene text recognition as a sequence-to-sequence prediction problem and widely adopt techniques like LSTM and the attention mechanism [1, 5] in their sequential transcription modules. However, the LSTM used in these recognizers is the fully-connected LSTM (FC-LSTM), which only takes stream signals like sentences or audio as inputs and connects them in a fully connected way, while scene text recognition generates sequential outputs from 2-D images. To adapt FC-LSTM to scene text recognition, the most straightforward way is to pool the 2-D feature maps to a height of one or flatten them into 1-D sequential feature vectors [8, 3, 4, 21, 22], as shown in Fig. 2(a). Unfortunately, such operations severely disrupt the valuable spatial correlations among pixels, which are essential to computer vision tasks, and especially to scene text recognition, where the structures of strokes are the key factors for discriminating characters. To retain such important spatial and structural information, researchers have also explored alternative solutions. For example, STN-OCR directly performed sequential prediction on 2-D feature maps with a fixed number of softmax classifiers, and CA-FCN generated character-level confidence maps with a fully convolutional network, as shown in Fig. 2(b). However, compared with LSTM, these solutions often introduce additional parameters or post-processing steps.
In this paper, we propose to address the issue of scene text recognition from the perspective of spatiotemporal prediction, where the spatial correlation information is taken into account when performing sequential prediction with LSTM. The ConvLSTM proposed by Shi et al.  for precipitation nowcasting provides some insights on how to achieve this. In ConvLSTM, all of the fully connected operations are replaced by convolutional ones, so input feature maps are allowed to keep their 2-D shape when being fed into the ConvLSTM. Given this advantage, for the first time, we introduce ConvLSTM to scene text recognition and apply it in the sequential transcription module of our proposed recognizer.
However, in existing models, both FC-LSTM and ConvLSTM are used only for frame-level prediction and are incapable of producing sequential outputs from one single input image unless Connectionist Temporal Classification (CTC) [7, 8, 21] or the attention mechanism [3, 4, 22, 27] is incorporated. To perform sequential prediction and, meanwhile, provide the model with spatial awareness, we further improve ConvLSTM by embedding the attention mechanism into its structure. Notably, different from the existing attention-LSTM-based recognizers, where the attention mechanism and FC-LSTM are combined in a fully connected way, we integrate the attention mechanism into ConvLSTM with convolutional operations. Moreover, as ConvLSTM extends 2-D operations into 3-D, the computation and memory costs increase significantly. To achieve high efficiency, inspired by Liu et al., we propose to assemble a bottleneck gate at the beginning of the proposed attention-equipped ConvLSTM so that the number of internal feature map channels can be reduced.
Last but not least, since existing attention-based recognizers often suffer from the 'attention drift' problem, i.e., they fail to align target outputs to the proper feature areas, we propose to learn additional character center masks with a second decoder branch in the encoder-decoder feature extraction stage, which helps the proposed network focus its attention on the right feature areas. The experiments conducted on benchmark datasets demonstrate that our proposed recognizer achieves performance comparable with the state-of-the-art approaches on regular, low-resolution and noisy text, and outperforms other methods significantly on the more challenging curved text.
The contributions made in this work are summarized as follows. (1) We propose to handle the scene text recognition problem from a spatiotemporal prediction perspective and for the first time introduce ConvLSTM to this application. (2) We design a ConvLSTM-based sequential transcription module, where the attention mechanism is harmoniously embedded into ConvLSTM with convolutional operations, and the bottleneck gate is assembled at the beginning of ConvLSTM to retain its efficiency. (3) We propose to learn additional character center masks to help the proposed network focus attention on the center of characters.
2 Related works
The existing scene text recognizers can be grouped into two categories, i.e., those utilizing traditional techniques and those based on deep learning. Methods belonging to the first category were mainly proposed before 2015 and follow a bottom-up routine, i.e., detecting and recognizing individual characters first, followed by word formation. Ye et al. provided a comprehensive survey of these methods. By contrast, deep learning-based recognizers depend on end-to-end trainable deep networks, where feature extraction and sequential transcription are integrated into one unified framework. According to the literature, deep learning-based recognizers are now the dominant solutions to scene text recognition and surpass traditional ones by large margins. Therefore, in this section, we only review recognizers applying deep learning techniques, along with ConvLSTM and its related variants.
Methods based on LSTM: LSTM is widely used in the existing state-of-the-art recognizers for three purposes, i.e., producing frame-level predictions required by the subsequent sequential transcription module [8, 21], encoding sequential features while considering historical information [22, 23], and directly generating sequential predictions in cooperation with the attention mechanism [3, 4, 27, 14, 23]. For example, CRNN proposed by Shi et al. was composed of three parts, i.e., the convolution module used to extract features from input images, a bi-LSTM layer built to make predictions for individual frames, and a CTC-based sequential transcription component utilized to infer sequential outputs from the frame-level predictions. Shi et al. employed a bi-LSTM layer in their RARE to extract sequential feature vectors from input feature maps, followed by feeding these vectors into an attention-Gated Recurrent Unit (GRU) module to generate label sequences. A highlight of RARE was its usage of the Spatial Transformer Network (STN), which was responsible for rectifying images containing irregular text and was widely adopted by subsequent recognizers like STN-OCR and ASTER. Afterwards, RARE was extended to ASTER by modifying the architecture of the rectification network. Note that LSTM was used for both feature encoding and sequential transcription in ASTER. Lee et al. combined a recursive CNN with a recurrent CNN in their RAM to capture long-term dependencies when extracting features from raw images, and then fed these features to an attention-RNN network for sequential transcription. Gao et al. [7, 8] designed two models to compare the performance of CNN and LSTM in terms of sequential feature encoding. According to their experiments, features extracted by LSTM were more powerful than those extracted by CNN. Cheng et al. [3, 4] combined LSTM with an attention mechanism in the sequential transcription modules of their FAN and AON recognizers, but they criticized that existing attention-based models often failed to align attention to the right feature areas when performing prediction. Therefore, a focusing network was assembled in their FAN to tackle this problem. AON was specially designed for irregular text recognition; in this work, features were extracted from four directions, and then combined and filtered with a filter gate. Wojna et al. utilized an attention-equipped LSTM to localize and recognize text in street view images. Their model was given location awareness by incorporating one-hot encoded spatial coordinates into the LSTM.
Note that the LSTM used in all of the methods mentioned above is the traditional FC-LSTM, so the 2-D feature maps have to be mapped into 1-D space to adapt to the LSTM layers, and the attention mechanism has to be incorporated in a fully connected way. This severely damages the spatial and structural information of input images, which is essential to computer vision tasks such as scene text recognition.
Methods without LSTM: At the beginning of the deep learning era, a group of deep CNN (DCNN) recognizers [11, 1] were developed and made breakthroughs over traditional recognizers. In these models, CNNs together with softmax classifiers were widely used for character or word classification. However, with the development of LSTM-based recognizers, the DCNN ones were quickly and significantly surpassed. Recently, some researchers argued that LSTM-based models were hard to train and unable to achieve good performance on non-horizontal text, so explorations of models without LSTM started again. For instance, STN-OCR utilized fully connected layers and a fixed number of softmax classifiers for sequential prediction; SqueezedText employed a binary convolutional encoder-decoder network to generate salience maps for individual characters and then exploited a GRU-based bi-RNN for further correction; Liao et al. proposed to address scene text recognition from a 2-D perspective with their CA-FCN model, so that spatial information could be taken into account when performing prediction. Concretely, they designed an FCN structure equipped with a character attention module to produce pixel-level confidence maps for target characters, and then fed these maps into a word formation module to generate word-level outputs.
Different from the LSTM-based approaches, recognizers without LSTM can better leverage spatial information, but they also unavoidably introduce additional parameters or post-processing steps in order to produce sequential outputs, such as the multiple classifiers used by STN-OCR and the word formation module designed for CA-FCN.
ConvLSTM and Related Variants: The main drawback of the traditional FC-LSTM is its usage of full connections in the input-to-state and state-to-state transitions, which results in the neglect of spatial information. To retain such important information, ConvLSTM, proposed by Shi et al., replaced all of the full connections of the traditional FC-LSTM with convolutional operations and extended the 2-D features and states into 3-D, as shown in Fig. 3. Their experimental results demonstrated the superiority of ConvLSTM over the traditional FC-LSTM. Thereafter, variants of ConvLSTM have been developed for action recognition, object detection in video, gesture recognition [29, 30], etc. For example, Zhu et al. combined ConvLSTM with 3-D convolution in a multimodal model and achieved promising gesture recognition performance. Li et al. designed a motion-based attention mechanism and combined it with ConvLSTM in their VideoLSTM, which was proposed for action recognition in videos.
In our work, aiming to better consider the spatial and structural information of input images when performing sequential prediction with LSTM, for the first time, we propose an attention-equipped ConvLSTM structure in the sequential transcription module, and further design a focused attention module to help learn more accurate alignment between predicted characters and corresponding feature areas.
3 The Proposed Method
As illustrated in Fig. 4, our proposed FACLSTM, i.e., Focused Attention ConvLSTM, consists of two components: the CNN-based feature extraction module and the ConvLSTM-based sequential transcription module. The feature extraction module is an encoder-decoder structure that takes VGG-16 as the backbone, while the sequential transcription module is a combination of ConvLSTM and the attention mechanism. More details are presented as follows.
3.1 CNN-based Feature Extraction
Backbone: Similar to Liao's work, we take VGG-16 as the encoder of our feature extraction module, and remove the fully connected layers as well as the pooling layers of the last two encoding stages. We also assemble two deformable convolutional layers at stage-4 and stage-5 of the decoder, given their flexible receptive fields. However, compared with Liao's network, the resolution of the final feature maps in our FACLSTM is restored to a smaller size than that used in their work, considering the memory and computation costs. Here, W, H and C denote the width, height and channels of the feature maps, respectively. In addition, we remove their character attention module from the encoder stage and instead design a focused attention module in the higher-level decoder stage, so that more abstract and powerful character center masks can be extracted.
Focused Attention Module: As pointed out in previous work, current attention-based models suffer from the 'attention drift' problem, i.e., they fail to obtain accurate alignment between target characters and the related feature areas, especially for complicated and low-quality images. To tackle this problem, we assemble two decoder branches in the feature extraction module of the proposed FACLSTM: one is used as usual for feature extraction, and the other is designed to learn additional character center masks. These masks are expected to guide the subsequent attention module on where to focus; obviously, at each timestep, the attention should be focused on the center of a certain character. Moreover, these masks also help to enhance foreground pixels and suppress background pixels.
In other works [7, 8, 16], the feature maps and attention maps are usually combined via element-wise multiplication. However, in our experiments we find that directly concatenating the feature maps and character center masks achieves better performance, since the subsequent attention-based module prefers to learn patterns from the feature maps and masks directly, rather than from their fused results. Therefore, direct concatenation is used in our FACLSTM.
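The two fusion options can be contrasted with a minimal numpy sketch; the tensor shapes below are illustrative assumptions, not the paper's actual dimensions.

```python
import numpy as np

def fuse_multiply(features, masks):
    # Element-wise fusion used in prior work: each feature channel is
    # weighted by the (broadcast) character center mask.
    return features * masks

def fuse_concat(features, masks):
    # Fusion used in FACLSTM: masks are appended as extra channels, so the
    # subsequent attention module sees features and masks separately.
    return np.concatenate([features, masks], axis=-1)

# Illustrative shapes: an 8x32 feature map with 64 channels and a
# single-channel character center mask.
features = np.random.rand(8, 32, 64)
masks = np.random.rand(8, 32, 1)

assert fuse_multiply(features, masks).shape == (8, 32, 64)
assert fuse_concat(features, masks).shape == (8, 32, 65)
```

Note that multiplication keeps the channel count unchanged, while concatenation grows it by the mask channels, which the first layer of the transcription module must account for.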
3.2 Sequential Transcription Module
As shown in Fig. 4, our sequential transcription module starts with an attention-equipped ConvLSTM, which generates T groups of feature maps, where T is the predefined maximal string length. Afterwards, a convolutional layer is applied to reduce the feature map channels, followed by a fully connected layer and a softmax classifier that sequentially predict the characters. Details of the proposed sequential transcription module are presented below.
ConvLSTM: The structure of the traditional FC-LSTM is illustrated in Fig. 3 (left), and its key formulations can be expressed as Eq. 1, where ∘ is the Hadamard product (i.e., element-wise multiplication), σ denotes the activation function of the input gate i_t, output gate o_t and forget gate f_t, and x_t, c_t and h_t represent the input features, cell states and cell outputs, respectively.
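For reference, the standard FC-LSTM formulation with peephole connections, following Shi et al.'s notation, which Eq. 1 refers to, can be written as:

```latex
\begin{aligned}
i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} \circ c_{t-1} + b_i) \\
f_t &= \sigma(W_{xf} x_t + W_{hf} h_{t-1} + W_{cf} \circ c_{t-1} + b_f) \\
c_t &= f_t \circ c_{t-1} + i_t \circ \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c) \\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} \circ c_t + b_o) \\
h_t &= o_t \circ \tanh(c_t)
\end{aligned}
```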
As we can see, FC-LSTM takes 1-D sequential feature vectors as input and calculates both the input-to-state and state-to-state transitions in a fully connected manner. Therefore, when applying it to computer vision tasks, the 2-D feature maps have to be mapped into 1-D space, during which the spatial correlations among pixels are badly damaged.
To take advantage of such valuable spatial and structural information in computer vision tasks, Shi et al. proposed ConvLSTM by incorporating convolutional structures into LSTM. As shown in Fig. 3 (right), all input features, gates, cell states and cell outputs are 3-D in ConvLSTM, and all of the input-to-state and state-to-state transitions are performed with convolutional operations instead of fully connected ones. Thus, the key formulations of ConvLSTM can be written as Eq. 2, where * denotes the convolution operation.
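Correspondingly, the ConvLSTM formulation that Eq. 2 refers to replaces the matrix products of FC-LSTM with convolutions (* denotes convolution, ∘ the Hadamard product, and X_t, C_t, H_t are now 3-D tensors):

```latex
\begin{aligned}
i_t &= \sigma(W_{xi} * X_t + W_{hi} * H_{t-1} + W_{ci} \circ C_{t-1} + b_i) \\
f_t &= \sigma(W_{xf} * X_t + W_{hf} * H_{t-1} + W_{cf} \circ C_{t-1} + b_f) \\
C_t &= f_t \circ C_{t-1} + i_t \circ \tanh(W_{xc} * X_t + W_{hc} * H_{t-1} + b_c) \\
o_t &= \sigma(W_{xo} * X_t + W_{ho} * H_{t-1} + W_{co} \circ C_t + b_o) \\
H_t &= o_t \circ \tanh(C_t)
\end{aligned}
```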
Proposed Attention-equipped ConvLSTM: The attention mechanism has achieved excellent performance in sequential prediction tasks such as machine translation, speech recognition, and scene text recognition [3, 4, 27, 14, 23]. In particular, in the field of scene text recognition, it has been widely combined with FC-LSTM or GRU to produce more accurate predictions. On the other hand, in existing works LSTM is used only for frame-level prediction and is seldom utilized for producing sequential outputs from one single input image unless combined with CTC or the attention mechanism.
Therefore, in this work, to adapt ConvLSTM to scene text recognition and, meanwhile, provide the proposed network with location awareness, we incorporate the attention mechanism into ConvLSTM by weighting the input feature maps with attention scores derived from the cell states and cell outputs of the previous timestep, as illustrated in Fig. 5. In addition, to retain the efficiency of the proposed network, an additional bottleneck gate is assembled before the original input, forget and output gates to reduce the number of internal feature map channels.
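The attention-weighting and bottleneck steps can be sketched as follows in numpy. This is a toy illustration under stated assumptions, not the paper's exact parameterization of Eqs. 3 and 4: attention scores are derived from the previous cell outputs and states with a 1x1 convolution, normalized with a spatial softmax, and used to weight the input feature maps before a 1x1 bottleneck reduces the channels.

```python
import numpy as np

def softmax2d(x):
    # Spatial softmax over an H x W score map.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_and_bottleneck(X, H_prev, C_prev, W_att, W_bn):
    # Concatenate previous cell outputs and states along channels,
    # score each spatial location with a 1x1 convolution (a dot product
    # over channels), weight the input, then reduce channels.
    state = np.concatenate([H_prev, C_prev], axis=-1)      # H x W x 2C
    scores = np.einsum('hwc,c->hw', state, W_att)          # 1x1 conv -> H x W
    att = softmax2d(scores)[..., None]                     # H x W x 1
    X_att = X * att                                        # attention-weighted input
    return np.einsum('hwc,cd->hwd', X_att, W_bn)           # 1x1 bottleneck conv

H, W, C, C_red = 8, 32, 64, 16
X = np.random.rand(H, W, C)
H_prev = np.random.rand(H, W, C)
C_prev = np.random.rand(H, W, C)
out = attend_and_bottleneck(X, H_prev, C_prev,
                            np.random.rand(2 * C), np.random.rand(C, C_red))
assert out.shape == (H, W, C_red)
```

The key point is that both attention and channel reduction are expressed as convolutions over 3-D tensors, so the 2-D layout of the feature maps is never flattened.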
Once the cell outputs are obtained from the proposed attention-equipped ConvLSTM, a convolutional layer is applied to reduce their channels, which also improves the model's efficiency, just as the bottleneck gate does. Afterwards, a fully connected layer and a softmax classifier are designed to generate the final sequential outputs, where each predicted character comes from the predefined charset. Compared with STN-OCR, where multiple fully connected layers and multiple softmax classifiers are assembled for sequential transcription, our FACLSTM employs only one fully connected layer and one softmax classifier, shared by all groups of feature maps.
Loss function: The objective function of our proposed FACLSTM consists of two parts, i.e., the sequential prediction loss and the mask loss, as formulated in Eq. 5, which takes the ground truth masks, predicted masks, smoothed ground truth strings and predicted sequential outputs as inputs, and uses a coefficient to balance the importance of the sequential prediction loss and the mask loss. Additionally, the label smoothing method proposed by Szegedy et al. helps regularize the proposed model; therefore, given the one-hot encoded ground truth, we convert it to a smoothed version with Eq. 6. Moreover, for the ground truth masks, we set the value of foreground pixels (centers of characters) to 1 and that of background pixels to 0; the mask loss is then calculated as in Eq. 7.
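Szegedy-style label smoothing, to which Eq. 6 refers, can be sketched as below; the smoothing factor 0.1 is an illustrative value, not necessarily the paper's setting.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    # Szegedy-style label smoothing: the hard 1 is reduced to 1 - eps and
    # the eps mass is spread uniformly over all K classes.
    k = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / k

y = np.zeros(39)   # one-hot ground truth over the 39-character charset
y[5] = 1.0
y_s = smooth_labels(y)
assert abs(y_s.sum() - 1.0) < 1e-9               # still a valid distribution
assert abs(y_s[5] - (0.9 + 0.1 / 39)) < 1e-9     # target class keeps most mass
```

Smoothing prevents the softmax classifier from being pushed toward arbitrarily large logits for the target class, which regularizes training.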
Generation of Ground Truth: Obviously, ground truth character center masks are required to optimize the proposed network. Given the bounding box of each individual character, we use the same shrinking method as in prior work to calculate the ground truth of the corresponding mask, as shown in Eq. 8.
Note that the shrink ratio is set to 0.25 in our experiments, instead of the 0.5 used in the original method.
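One plausible reading of this mask generation (a sketch, not the paper's exact Eq. 8) is to shrink each character's bounding box towards its center by the shrink ratio and label the pixels inside the shrunk box as foreground:

```python
import numpy as np

def character_center_mask(height, width, box, shrink=0.25):
    # box = (x0, y0, x1, y1): a character's axis-aligned bounding box.
    # The box is shrunk towards its center by the shrink ratio; pixels
    # inside the shrunk region are labeled 1 (character center), others 0.
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    w, h = (x1 - x0) * shrink, (y1 - y0) * shrink
    mask = np.zeros((height, width), dtype=np.float32)
    ys, xs = np.ogrid[:height, :width]
    inside = (np.abs(xs - cx) <= w / 2) & (np.abs(ys - cy) <= h / 2)
    mask[inside] = 1.0
    return mask

m = character_center_mask(16, 16, (2, 2, 14, 14), shrink=0.25)
assert m[8, 8] == 1.0     # box center is foreground
assert m[2, 2] == 0.0     # box corner falls outside the shrunk region
assert 0 < m.sum() < 16 * 16
```

A smaller shrink ratio (0.25 vs. 0.5) keeps only the innermost part of each character, which concentrates the attention supervision more tightly on character centers.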
4 Experiments
4.1 Datasets
We train the proposed FACLSTM network with 7 million synthetic images from the SynthText dataset, without fine-tuning on individual real-world datasets, and evaluate the performance on three widely used benchmarks: the regular text dataset IIIT5K, the low-resolution and noisy text dataset SVT, and the curved text dataset CUTE.
SynthText was proposed by Gupta et al. for scene text detection. The original dataset is composed of 800,000 scene text images, each with multiple word instances. Texts in this dataset are rendered in different styles and annotated with character-level bounding boxes. Overall, about 7 million text images are cropped from it for scene text recognition.
SVT is a very challenging dataset collected by Wang et al. from Google Street View. In total, it includes 647 low-resolution and noisy text images.
CUTE was released by Risnumawan et al. There are only 288 word images in this dataset, but most of them are severely curved. Therefore, compared with other datasets, CUTE is more challenging.
4.2 Implementation Details
In our experiments, all input images are scaled to a fixed size with the aspect ratio preserved. The maximal string length is set to 20, including one START token and one EOS token, which means up to 18 real characters are allowed within an individual word. Our charset is composed of 39 characters, i.e., 26 alphabet letters, 10 digits, 1 START token, 1 EOS token and 1 special token for any other symbols. The Adam optimizer with an initial learning rate of 1e-4 is employed to optimize the proposed network. In total, the proposed FACLSTM is trained for five epochs, with learning rates of 1e-4, 1e-4, 5e-5, 1e-5 and 1e-6, respectively. Moreover, the number of channels of the convolutional operations in Eqs. 3 and 4 is set to 256, with the kernel size as shown in Fig. 5. Finally, the proposed network is implemented using the TensorFlow framework.
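The 39-symbol charset described above can be built as follows; the token names are illustrative assumptions, and only the counts (26 letters, 10 digits, START, EOS, and one catch-all) come from the text.

```python
import string

# 26 letters + 10 digits + START + EOS + one token for all other symbols.
charset = list(string.ascii_lowercase) + list(string.digits) \
          + ['<START>', '<EOS>', '<OTHER>']
char_to_id = {c: i for i, c in enumerate(charset)}

assert len(charset) == 39
assert char_to_id['a'] == 0 and char_to_id['<EOS>'] == 37
```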
4.3 Experimental Results
We evaluate the performance of our proposed FACLSTM on the aforementioned three benchmark datasets and compare it with that of the state-of-the-art approaches. Table 1 presents the detailed comparison results. Note that, in this table, CA-FCN and SqueezedText are the two most recent recognizers, both published in AAAI 2019.
| Method | Usage of LSTM | IIIT5K_None | IIIT5K_50 | IIIT5K_1k | SVT | CUTE |
| --- | --- | --- | --- | --- | --- | --- |
| (Gao et al.)* | FC-LSTM | 83.6 | 99.1 | 97.2 | 83.9 | - |
| SqueezedText (binary) | FC-LSTM | 86.6 | 96.9 | 94.3 | - | - |
| SqueezedText (full-precision) | FC-LSTM | 87.0 | 97.0 | 94.1 | - | - |
| (Gao et al.)* | No | 81.8 | 99.1 | 97.9 | 82.7 | - |
Comparison with Methods based on the Traditional FC-LSTM: As previously introduced, the traditional FC-LSTM is widely used in existing scene text recognizers. Among the methods listed in Table 1, RARE, AON and FAN combined FC-LSTM with the attention mechanism in a fully connected way when performing sequential transcription, while CRNN, RAM, Gao's model and SqueezedText utilized FC-LSTM for frame-level prediction, sequential feature encoding or other purposes.
As shown in Table 1, our proposed FACLSTM outperforms these methods by large margins on the regular text dataset IIIT5K (90.5% vs. 87.4%) and the curved text dataset CUTE (83.3% vs. 76.8%) when no lexicon is used. Our performance also ranks first on the IIIT5K dataset when the 1k-word lexicon is used, and second when the 50-word lexicon is used.
As for the low-resolution and noisy dataset SVT, our proposed FACLSTM also achieves competitive performance. Readers should keep in mind that both of Gao's methods [7, 8] were evaluated on an incomplete SVT dataset from which words containing non-alphanumeric characters, or shorter than three characters, were removed. Moreover, apart from the 7 million training images from SynthText, the recognizers AON and FAN also employed an additional 4 million images provided by Jaderberg et al. for training. These 4 million images were generated with a 50k-word lexicon that covers all the test words of the ICDAR and SVT datasets, and were blended with word images randomly sampled from these two datasets; thus, recognition performance on the SVT dataset benefits largely from their usage, as can also be seen from Liao's work. Unfortunately, Jaderberg et al. did not provide character-level bounding boxes, so we cannot employ their dataset to train our proposed network unless we remove the focused attention module from our FACLSTM or, as Liao et al. did, generate new synthetic images with their 50k-word lexicon. Furthermore, since both synthetic datasets contain large amounts of images, training on both of them is beyond our hardware capacity.
Comparison with Non-LSTM-based Methods: Considering the limitations of the traditional FC-LSTM, such as neglecting spatial and structural information and slow training convergence, CA-FCN, Gao's model and STN-OCR have explored non-LSTM solutions. In particular, CA-FCN also addressed the recognition issue from the 2-D perspective by utilizing an FCN structure.
From Table 1, we can see that the accuracy of our proposed FACLSTM is 1.5% lower than that of the best recognizer, i.e., CA-FCN, on the regular text dataset IIIT5K. However, on the more challenging curved text dataset CUTE, we achieve an accuracy of 83.3%, which is 5.2% higher than that of CA-FCN. As for the low-resolution and noisy dataset SVT, our FACLSTM performs slightly better than CA-FCN, with an accuracy of 82.2% (vs. 82.1%). Note that CA-FCN is not an end-to-end trainable system, because an empirical rule-based word formation module is required to infer the final sequential outputs from the pixel-level predictions generated by its network. By contrast, our FACLSTM directly produces the final sequential outputs via the proposed ConvLSTM-based sequential transcription module.
Effectiveness of the Proposed Focused Attention Module and ConvLSTM-based Sequential Transcription Module: Furthermore, to highlight the effectiveness of our proposed focused attention module and ConvLSTM-based sequential transcription module, we compare the performance of our proposed FACLSTM with that of the following two baseline models:
FLSTM_base1, which shares the same feature extraction module with our proposed FACLSTM but removes the focused attention module. In addition, the sequential transcription module used in this model is the traditional attention-based FC-LSTM network, just like the one used in AON, FAN and both of Gao's models [7, 8].
FAFLSTM_base2, which is built upon FLSTM_base1, but with the proposed focused attention module applied.
From the comparison of FLSTM_base1, FAFLSTM_base2 and our proposed FACLSTM, we can see that the recognition accuracies on the IIIT5K, SVT and CUTE datasets are elevated by 14.1%, 19.5% and 8.4%, respectively, when the proposed focused attention module is assembled. When the traditional attention-based FC-LSTM is replaced by our proposed ConvLSTM-based sequential transcription module, further improvements of 2.7%, 3.6% and 7.6% are achieved. Therefore, both of the proposed modules are effective.
In summary, on the regular text dataset, our proposed FACLSTM outperforms all of the listed FC-LSTM-based and non-LSTM-based recognizers except CA-FCN, while on the more challenging curved text dataset, our FACLSTM surpasses all of the listed methods significantly with an accuracy of 83.3%, including CA-FCN (78.1%). Moreover, the comparisons with the two baseline models demonstrate the effectiveness of our proposed focused attention module and ConvLSTM-based sequential transcription module. Finally, we also visualize the predicted masks and the attention shift procedure in Fig. 6.
5 Conclusions
Scene text recognition has been treated as a sequence-to-sequence prediction problem for quite a long time, and the traditional FC-LSTM is widely used in current state-of-the-art recognizers. In this work, we have argued that scene text recognition is actually a spatiotemporal prediction problem and have proposed to tackle it from the spatiotemporal perspective. Toward this end, we have presented an effective scene text recognizer named FACLSTM, in which ConvLSTM is applied and improved by integrating the attention mechanism into the sequential transcription module, and a focused attention module is designed for the encoder-decoder feature extraction stage. Experimental results reveal that our proposed FACLSTM handles both regular and irregular (low-resolution, noisy and curved) text well; especially for curved text, FACLSTM outperforms other advanced approaches by large margins. Thus, we conclude that ConvLSTM is more effective for scene text recognition than the widely used FC-LSTM, since the valuable spatial and structural information can be better leveraged when performing sequential prediction with ConvLSTM.
References
-  D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014.
-  C. Bartz, H. Yang, and C. Meinel. Stn-ocr: a single neural network for text detection and recognition. arXiv preprint arXiv:1707.08831, 2017.
-  Z. Cheng, F. Bai, Y. Xu, and G. Zheng. Focusing attention: towards accurate text recognition in natural images. In IEEE International Conference on Computer Vision, pages 5086–5094, 2017.
-  Z. Cheng, Y. Xu, F. Bai, Y. Niu, S. Pu, and S. Zhou. Aon: towards arbitrarily-oriented text recognition. In International Conference on Computer Vision and Pattern Recognition, pages 5571–5579, 2018.
-  J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech recognition. CoRR, abs/1506.07503, 2015.
-  J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In IEEE International Conference on Computer Vision, pages 764–773, 2017.
-  Y. Gao, Y. Chen, J. Wang, and H. Lu. Reading scene text with attention convolutional sequence modeling. arXiv preprint arXiv:1709.04303, 2017.
-  Y. Gao, Y. Chen, J. Wang, M. Tang, and H. Lu. Dense chained attention network for scene text recognition. In International Conference on Image Processing, pages 679–683, 2018.
-  A. Gupta, A. Vedaldi, and A. Zisserman. Synthetic data for text localization in natural images. In International Conference on Computer Vision and Pattern Recognition, pages 2315–2324, 2016.
-  S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9:1735–1780, 1997.
-  M. Jaderberg, A. Vedaldi, and A. Zisserman. Deep features for text spotting. In ECCV, 2014.
-  M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Synthetic data and artificial neural networks for natural scene text recognition. arXiv preprint arXiv:1412.1842, 2014.
-  M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. CoRR, abs/1506.02025, 2015.
-  C. Lee and S. Osindero. Recursive recurrent nets with attention modeling for ocr in the wild. In International Conference on Computer Vision and Pattern Recognition, pages 2231–2239, 2016.
-  Z. Li, K. Gavrilyuk, E. Gavves, M. Jain, and C. G. Snoek. Videolstm convolves, attends and flows for action recognition. Computer Vision and Image Understanding, 166:41–50, 2018.
-  M. Liao, J. Zhang, Z. Wan, F. Xie, J. Liang, P. Lyu, C. Yao, and X. Bai. Scene text recognition from two-dimensional perspective. In AAAI, 2019.
-  M. Liu and M. Zhu. Mobile video object detection with temporally-aware feature maps. In International Conference on Computer Vision and Pattern Recognition, pages 5571–5579, 2018.
-  Z. Liu, Y. Li, F. Ren, W. Goh, and H. Yu. Squeezedtext: a real-time scene text recognition by binary convolutional encoder-decoder network. In AAAI, pages 7194–7201, 2018.
-  A. Mishra, K. Alahari, and C. Jawahar. Top-down and bottom-up cues for scene text recognition. In International Conference on Computer Vision and Pattern Recognition, pages 2687–2694, 2012.
-  A. Risnumawan, P. Shivakumara, C. Chan, and C. Tan. A robust arbitrary text detection system for natural scene images. Expert Systems with Applications, 41:8027–8048, 2014.
-  B. Shi, X. Bai, and C. Yao. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(11):2298–2304, 2017.
-  B. Shi, X. Wang, P. Lyu, C. Yao, and X. Bai. Robust scene text recognition with automatic rectification. In International Conference on Computer Vision and Pattern Recognition, pages 4168–4176, 2016.
-  B. Shi, M. Yang, X. Wang, P. Lyu, X. Bai, and C. Yao. Aster: an attention scene text recognizer with flexible rectification. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 31(11):855–868, 2018.
-  X. Shi, Z. Chen, H. Wang, D. Yeung, W. Wong, and W. Woo. Convolutional lstm network: a machine learning approach for precipitation nowcasting. In NIPS, 2015.
-  C. Szegedy, V. Vanhoucke, S. Ioffe, and Z. Wojna. Rethinking the inception architecture for computer vision. In International Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
-  K. Wang, B. Babenko, and S. Belongie. End-to-end scene text recognition. In IEEE International Conference on Computer Vision, pages 1457–1464, 2011.
-  Z. Wojna, A. Gorban, D. Lee, K. Murphy, Q. Yu, Y. Li, and J. Ibarz. Attention-based extraction of structured information from street view imagery. In International Conference on Document Analysis and Recognition, pages 844–850, 2017.
-  Q. Ye and D. Doermann. Text detection and recognition in imagery: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(7):1480–1500, 2015.
-  L. Zhang, G. Zhu, L. Mei, P. Shen, S. Shah, and M. Bennamoun. Attention in convolutional lstm for gesture recognition. In NIPS, 2018.
-  G. Zhu, L. Zhang, P. Shen, and J. Song. Multimodal gesture recognition using 3-d convolution and convolutional lstm. IEEE Access, 5:4517–4524, 2017.