EATEN: Entity-aware Attention for Single Shot Visual Text Extraction

by   He Guo, et al.
Baidu, Inc.

Extracting entities from images is a crucial part of many OCR applications, such as entity recognition for cards, invoices, and receipts. Most existing works employ the classical detection-and-recognition paradigm. This paper proposes an Entity-aware Attention Text Extraction Network, called EATEN, an end-to-end trainable system that extracts entities without any post-processing. In the proposed framework, each entity is parsed by its corresponding entity-aware decoder. Moreover, we introduce a state transition mechanism that further improves the robustness of entity extraction. Given the absence of public benchmarks, we construct a publicly available dataset of almost 0.6 million images in three real-world scenarios (train ticket, passport, and business card). To the best of our knowledge, EATEN is the first single-shot method to extract entities from images. Extensive experiments on these benchmarks demonstrate the state-of-the-art performance of EATEN.





I Introduction

Recently, scene text detection and recognition, two fundamental tasks in computer vision, have become increasingly popular due to their wide applications, such as scene text understanding [15] and image and video retrieval [4]. Among these applications, extracting Entities of Interest (EoIs) is one of the most challenging and practical problems, since it requires identifying texts that belong to certain entities. Taking the passport (Fig. 3) as an example, there are many entities in the image, such as Country, Name, and Birthday. In practical applications, we only need to output the texts for some predefined entities, e.g., "China" or "USA" for the entity "Country", "Jack" or "Rose" for the entity "Name". Previous approaches [7, 16, 6] mainly adopt two steps: text information is first extracted via OCR (Optical Character Recognition), and then EoIs are extracted by handcrafted rules or layout analysis.

Nevertheless, in the detection and recognition paradigm, engineers have to develop post-processing steps: handcrafted rules that determine which part of the recognized text belongs to the predefined EoIs. It is usually these post-processing steps, rather than the detection and recognition capability, that restrain the performance of EoI extraction. For example, if the positions of entities have a slight offset from the standard positions, inaccurate entities will be extracted due to the sensitive template representation.

Fig. 1: EATEN: an end-to-end trainable framework for extracting EoIs from images. We use a CNN to extract high-level visual representations, and an entity-aware attention network to decode all the EoIs. We pad the ground truth of every decoder to a fixed length with EOS characters. s_0 is the initial state of the 1st entity-aware decoder. The red dashed line shows the process of state transition: the last state of decoder d_i becomes the initial state of decoder d_{i+1}.

In this paper, a single-shot Entity-aware Attention Text Extraction Network (EATEN) is proposed to extract EoIs from images within a single neural network. As shown in Fig. 1, we use a CNN-based feature extractor to extract feature maps from the original image. We then design an entity-aware attention network, composed of multiple entity-aware decoders, initial state warm-up, and state transition between decoders, to capture all entities in the image. Compared with traditional methods, EATEN is an end-to-end trainable framework instead of a multi-stage procedure. Thanks to its spatial attention mechanism, EATEN covers most corner cases with arbitrary shapes, projective/affine transformations, and position drift without any correction. Since large datasets for visual EoI extraction are rare, we construct a new dataset with about 0.6 million real and synthetic images, which includes train tickets, passports, and business cards, containing Chinese, English, and digital EoIs. EATEN shows significant improvements over related methods and runs at high speed in all three scenarios (train ticket, passport, and business card) on a Tesla P40 GPU. The results show that EATEN can deal with EoIs in Chinese as well as in English and digits.

In summary, our approach has three main contributions: (1) We propose an end-to-end trainable system to extract EoIs from images in a unified framework. (2) We design an entity-aware attention network with multiple decoders and state transition between contiguous decoders, so that EoIs can be quickly located and extracted without any complicated post-processing. (3) We establish a large dataset for visual EoI extraction; three real-world scenarios are selected to make the dataset practical and useful.

II Related Work

Typically, scene text reading [13, 5, 24, 14] falls into two categories: one stream recognizes all the texts in the image, and the other merely recognizes the EoIs, which is also called entity extraction or structural information extraction.

The first stream of scene text reading usually contains two modules, scene text detection [9] and scene text recognition [13]. A text line is described as a rectangle, a quadrilateral, or even a mask region, by regression or segmentation methods [24, 22, 23]. Once the locations of texts are obtained, many recognition algorithms, such as CRNN [17] and attention-based methods [13], can be utilized to read the texts in the image. Recently, detection and recognition branches have been merged into an end-to-end framework and trained jointly [18].

EoI extraction is the second stream of scene text reading and is vital to real applications such as recognizing the entities of credit cards. Classical approaches are based on rules and templates [16, 12]: they first recognize all the texts in the image with OCR methods, and then extract EoIs with projective transformations, handcrafted strategies, and many post-processing steps. Spatial connections, segmentation, and layout analysis methods [7, 1, 2] have been employed to extract structural information of EoIs. Gal et al. [3] created an embedding that merges spatial and linguistic features for extracting EoI information from images. However, these processing pipelines are redundant and complex, so rule adjustment must be done very carefully. Most of those methods verified their algorithms on self-established datasets available only for private experiments. Thus, we construct a real-world scenario dataset.

In addition, many methods [21, 11] have been presented that avoid relying on the results of an OCR pipeline. Word-spotting-based methods [4, 20] directly predict both bounding boxes and a compact text representation of the words within them. D. He et al. [8] verified the existence of a certain text string in an image with a spatial attention mechanism. Although these new methods outperform the traditional ones, the extracted words carry no entity information. To solve this problem, we present an entity-aware attention network to extract EoIs for all the entities.

III Proposed Method

Fig. 1 shows the framework of EATEN. The CNN-based backbone extracts high-level visual features from images, and the entity-aware attention network automatically learns the entity layout of images and decodes the content of predefined EoIs with entity-aware decoders. In addition, initial state warm-up and state transition are proposed to improve the performance of our method.

We choose Inception v3 [19] as the backbone. The feature maps produced by the backbone are denoted as F, whose shape is h x w x c, where c is the number of channels. Let E = {e_1, ..., e_n} denote the set of entities, where n is the number of entities. Let D = {d_1, ..., d_m} denote the entity-aware decoders, each with its own number of time steps, where m is the number of decoders. Since one decoder is able to capture one or several EoIs, m is not necessarily equal to n. To build semantic relations between neighboring EoIs, we employ the last state of the previous decoder to initialize the current decoder. We also use initial state warm-up to boost the performance of the attention mechanism. Considering that every decoder is aware of its corresponding entity or entities, we call this network an entity-aware attention network. EATEN performs no explicit text recognition and uses the entity-aware decoders to decode the corresponding EoIs directly. No lexicon is used in this work.

III-A Entity-aware Decoder

In general, the decoders are arranged from left to right and top to bottom. We assign multiple entities to one decoder if these entities always show up in the same text line, such as SS/TAN/DS in Fig. 5. The decoding process for a single entity is illustrated in Fig. 2.

Fig. 2: Entity-aware decoder. We use an attention mechanism to obtain the context feature c_t, then model the sequence of context features with an LSTM. At last, we concatenate the LSTM output o_t and c_t as the character feature.

In each decoding step, the entity-aware decoder first uses the entity-aware attention mechanism to obtain the corresponding context feature. The context feature, combined with the previously predicted character, is then fed into an LSTM unit, which updates its state and predicts the current character. Given a hidden state h_{t-1} from the LSTM and the feature maps F, the context feature c_t is computed as

    \alpha_t = \mathrm{softmax}\big(V_a \tanh(U_f * F' + U_h h_{t-1})\big), \qquad c_t = \sum_i \alpha_{t,i} \odot f_i

where F' = (f_1, ..., f_{hw}) is rearranged from F, and its size is (h \cdot w) \times c. U_f * F' is accomplished by a 1x1 convolution, \odot is elementwise multiplication, and V_a is a linear transformation. U_f, U_h, and V_a are weight matrices to be learned.

The LSTM updates h_{t-1} to h_t, and its output o_t is combined with the updated context feature c_t to generate the final predicted distribution over characters:

    (o_t, h_t) = \mathrm{LSTM}\big([c_t;\, W_e y_{t-1}],\, h_{t-1}\big), \qquad p(y_t) = \mathrm{softmax}\big(W_o [o_t; c_t] + b_o\big)

where y_{t-1} is the one-hot encoding of the previous character, and W_e, W_o, and b_o are weight matrices to be learned. y_t is the predicted character of time step t. The decoder index is omitted here for simplicity.
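As a minimal sketch, the entity-aware attention step (Bahdanau-style additive attention over the rearranged feature maps) can be written in a few lines of numpy; all weight names and shapes here are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_step(F_seq, h_prev, U_f, U_h, v_a):
    """One attention step over F_seq, the (hw, c) sequence rearranged
    from the backbone feature maps; h_prev is the previous LSTM state."""
    scores = np.tanh(F_seq @ U_f + h_prev @ U_h) @ v_a   # (hw,) alignment scores
    alpha = softmax(scores)                              # attention weights
    context = alpha @ F_seq                              # (c,) context feature
    return context, alpha

hw, c, d = 64, 32, 16                 # toy sizes
F_seq = rng.standard_normal((hw, c))
h_prev = rng.standard_normal(d)
U_f = rng.standard_normal((c, d))
U_h = rng.standard_normal((d, d))
v_a = rng.standard_normal(d)
ctx, alpha = attention_step(F_seq, h_prev, U_f, U_h, v_a)
```

The weights would be learned jointly with the LSTM in practice; here they are random only so the shapes can be checked.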

III-B Initial State Warm-up

In our observation, the vanilla entity-aware decoding model cannot converge. The main problem lies in the first decoding step of the first entity-aware decoder, which tends to generate an identical character no matter what image is fed to EATEN. The decoder requires information from h_{t-1}, c_{t-1}, and y_{t-1} to update the context feature and predict a character at step t. However, in the first step these values are initialized with constants, so only the updated context feature is available, and it is barely possible for the network to make a successful prediction from that alone. To make the first-step prediction stable, a buffer decoding step is introduced to warm up the decoder. The buffer decoding step outputs a special character token WARMUP that is discarded later, and its resulting hidden state is computed by one ordinary decoding step from the constant initial values.


As shown in Fig. 1, this hidden state is regarded as the initial state of the 1st entity-aware decoder, denoted as s_0.
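A minimal numpy sketch of the warm-up: one buffer decoding step is run from constant (zero) states, the predicted WARMUP character is discarded, and only the resulting state is kept. The LSTM cell and the WARMUP embedding here are toy stand-ins, not the paper's implementation:

```python
import numpy as np

def lstm_cell(x, h, c, W, U, b):
    """Minimal LSTM cell; gate pre-activations stacked as [i, f, o, g]."""
    z = W @ x + U @ h + b
    d = h.shape[0]
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * d:(k + 1) * d])) for k in range(3))
    g = np.tanh(z[3 * d:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
x_dim, d = 8, 4
W = rng.standard_normal((4 * d, x_dim))
U = rng.standard_normal((4 * d, d))
b = np.zeros(4 * d)

# Warm-up: start from constant (zero) states, feed a hypothetical WARMUP
# token embedding, discard the character prediction, keep the state as s0.
warmup_embedding = np.ones(x_dim)   # stand-in for the WARMUP token embedding
s0 = lstm_cell(warmup_embedding, np.zeros(d), np.zeros(d), W, U, b)
```

After this buffer step, s0 already depends on the image-conditioned inputs, which is what stabilizes the first real prediction.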

Fig. 3: Sample images: (a) the first row shows a real train ticket image and the second row a synthetic one. (b) Synthetic passport images. (c) Synthetic business card images.

III-C State Transition

The decoders are independent of each other in our entity-aware attention network, yet in most scenarios the EoIs have strong semantic relations. We therefore design state transition between neighboring decoders to establish the relations among EoIs. As can be seen in Fig. 1, assuming that (h_i, c_i) is the last state of decoder d_i, the initial state of the next decoder d_{i+1} is set to s_{i+1} = (h_i, c_i). Therefore, the semantic information of all the previous decoders is integrated into the current decoder. Experimental results show that state transition clearly improves performance.
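The state transition can be sketched as a chain of decoders, each initialized with the final state of its predecessor. `toy_step` below is a hypothetical decoding step; only the decoder step counts (14, 20, 14, 12, 6 for train ticket, from Sec. V-B) come from the paper:

```python
import numpy as np

def run_decoder(init_state, steps, step_fn):
    """Run one entity-aware decoder for a fixed number of steps and
    return its final (h, c) state plus the emitted token ids."""
    h, c = init_state
    outputs = []
    for _ in range(steps):
        h, c, y = step_fn(h, c)
        outputs.append(y)
    return (h, c), outputs

def toy_step(h, c):
    # hypothetical decoding step: evolve the state, emit a token id
    h2, c2 = np.tanh(h + 0.1), np.tanh(c + 0.2)
    return h2, c2, int(h2.sum() > 0)

state = (np.zeros(4), np.zeros(4))    # s0 from the warm-up step
decoder_steps = [14, 20, 14, 12, 6]   # train-ticket configuration
all_eois = []
for T in decoder_steps:
    # state carries over between decoders: s_{i+1} = (h_i, c_i)
    state, tokens = run_decoder(state, T, toy_step)
    all_eois.append(tokens)
```

Because `state` is threaded through the loop rather than reset, each decoder starts from the semantic context accumulated by all previous decoders.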

The model is trained by maximum likelihood estimation, and the overall loss is expressed as

    L = \sum_{i=1}^{m} L_i

where L_i is the loss for the i-th decoder, defined as the negative log likelihood.
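A minimal sketch of this training objective, assuming each decoder outputs a per-step probability distribution over characters; the helper names are illustrative:

```python
import numpy as np

def nll(probs, targets):
    """Negative log likelihood of target character indices under
    per-step probability distributions of shape (steps, vocab)."""
    return -np.log(probs[np.arange(len(targets)), targets]).sum()

def total_loss(decoder_probs, decoder_targets):
    # L = sum_i L_i over all entity-aware decoders
    return sum(nll(p, t) for p, t in zip(decoder_probs, decoder_targets))

# two toy decoders over a vocabulary of 3 characters, uniform predictions
p1 = np.full((2, 3), 1 / 3)
t1 = np.array([0, 2])
p2 = np.full((1, 3), 1 / 3)
t2 = np.array([1])
loss = total_loss([p1, p2], [t1, t2])   # three uniform steps: 3 * log(3)
```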

IV Dataset

IV-A Synthetic Data Engine

The text in many applications contains various personal information, such as ID card number, name, and home address, which must be erased before release to the public. Data synthesis bypasses this privacy problem, and it has also proved very helpful in scene text detection and recognition [5, 10]. Following the success of the synthetic word engine [10], we propose a more efficient text rendering pipeline.

Fig. 4: Synthetic image samples: (a) train ticket. (b) passport. (c) business card. From left to right: images after font rendering, transformation, and noising.

Fig. 4 illustrates the synthetic image generation processes and displays some synthetic images. The generation processes contain four steps:

  • Text preparing. To make the synthetic images more general, we collected a large corpus, including Chinese names, addresses, etc., by crawling the Internet.

  • Font rendering. We select one font for each scenario, and the EoIs are rendered on the background template images using an open image library. In the business card scenario in particular, we prepared more than one hundred template images, containing 85 simple-background images plus pure-color images with random colors, to render text on.

  • Transformation. We rotate the image randomly within [-5, +5] degrees, then resize it according to the longer side. Elastic transformation is also employed.

  • Noise. Gaussian noise, blur, average blur, sharpening, and brightness, hue, and saturation jitter are applied.

The images are resized and padded to a fixed size for train ticket and passport. The template images of business card are already of fixed size, so no extra resizing or padding is needed. The aspect ratio of text is preserved in all three scenarios, which is important for text recognition. The images and their corresponding EoI labels are saved together. There is no overlap between the EoIs of the test set and those of the training set.
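The four generation steps can be sketched as a pipeline; all helpers here are toy stand-ins for the real font rendering, geometric transformation, and noising operations, and the corpus is a two-entry placeholder:

```python
import random

random.seed(0)

# toy corpus standing in for the crawled names/addresses corpus
corpus = {"Name": ["Jack", "Rose"], "Country": ["China", "USA"]}

def render(template, texts):     # stand-in for font rendering on a template
    return template + "|" + ",".join(f"{k}={v}" for k, v in sorted(texts.items()))

def transform(image, angle):     # stand-in for rotation/resize/elastic transform
    return f"{image}|rot={angle:.1f}"

def add_noise(image):            # stand-in for blur/sharpen/brightness/hue jitter
    return image + "|noised"

def synthesize(template):
    """Four-step pipeline: text preparing -> font rendering -> transformation -> noise."""
    texts = {k: random.choice(v) for k, v in corpus.items()}        # text preparing
    image = render(template, texts)                                 # font rendering
    image = transform(image, angle=random.uniform(-5, 5))           # [-5, +5] degrees
    return add_noise(image), texts   # image and its EoI labels are saved together

img, labels = synthesize("ticket_template")
```

The key property mirrored here is that each synthetic image is emitted together with its EoI labels, so no manual annotation is needed.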

IV-B Benchmark Datasets

Train ticket. Sample images are shown in Fig. 3; the top left corner shows a real train ticket image. The real images were shot in a finance department with inconsistent lighting conditions, orientations, background noise, imaging distortions, etc. We then removed private information such as first name, ID card number, QR code, and seat number. Synthetic images were also produced with the synthetic data engine. The train ticket dataset includes a total of 2k real images and 300k synthetic images; 0.4k real images are used for testing and the rest for training (90%) and validation (10%).

Passport. As shown in Fig. 3(b), we select one template image for passport and erase the private information on it, such as name, passport number, and face. The passport dataset includes a total of 100k synthetic images; 2k images are used for testing and the rest for training and validation.

Business card. Synthetic business card images are shown in Fig. 3(c). The positions of the EoIs are not constant and some EoIs may be absent, which makes extraction challenging. The business card dataset includes a total of 200k synthetic images; 2k images are used for testing and the rest for training and validation.

In contrast to the train ticket scenario, we verify our approach only on synthetic data for passport and business card, due to privacy concerns.

V Experimental Results

V-A Implementation Details

We use an SGD (stochastic gradient descent) optimizer with an initial learning rate of 0.04, an exponential learning-rate decay factor of 0.94 every 4 epochs, and momentum of 0.9. To prevent overfitting, we regularize the model with weight decay 1e-5 in Inception v3 and the character softmax layer, label smoothing with epsilon 0.1, and clipping of the LSTM cell state to [-10, 10]. To make training more stable, we clip gradients by norm at 2.0. We use 8 Tesla P40 GPUs for training and 1 Tesla P40 GPU for testing. The batch size differs per dataset because of the different input image sizes, numbers of entity-aware decoders, and decoding steps per decoder.
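The learning-rate schedule and gradient clipping above can be sketched as follows; this is a hedged numpy sketch of the stated hyperparameters, not the authors' TensorFlow code:

```python
import numpy as np

def learning_rate(epoch, base=0.04, decay=0.94, every=4):
    """Exponential schedule from Sec. V-A: multiply the rate by 0.94 every 4 epochs."""
    return base * decay ** (epoch // every)

def clip_by_norm(grad, max_norm=2.0):
    """Clip a gradient vector so its L2 norm does not exceed max_norm (2.0 here)."""
    norm = np.linalg.norm(grad)
    return grad if norm <= max_norm else grad * (max_norm / norm)
```

For example, `learning_rate(0)` is 0.04 and the rate drops to 0.04 * 0.94 at epoch 4; a gradient of norm 5 is rescaled to norm 2.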

Method \ Dataset     Only synth (270k synth)   Only real (1.5k real)   Fused (270k synth + 1.5k real)
General OCR          79.2%                     79.2%                   79.2%
Attention OCR [21]   26.0%                     68.1%                   90.0%
EATEN w/o state      57.0%                     55.4%                   91.0%
EATEN                55.1%                     86.2%                   95.8%
TABLE I: Results on train ticket. We verify our method with three training sets: only synthetic data, only real data, and fused data. The evaluation metric is mEA.

V-B Experiment Setting

Evaluation Metrics. In the train ticket and passport scenarios, we define mean entity accuracy (mEA) to benchmark EATEN, computed as

    \mathrm{mEA} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{I}(p_i = g_i)

where p_i is the predicted text and g_i is the target text of the i-th entity, N is the number of entities, and \mathbb{I} is the indicator function that returns 1 if p_i equals g_i and 0 otherwise. In the business card scenario, not all EoIs are guaranteed to appear, so we define mean entity precision (mEP), mean entity recall (mER), and mean entity F-measure (mEF) to benchmark our task:

    \mathrm{mEP} = \frac{\sum_i \mathbb{I}(p_i = g_i)}{N_p}, \qquad \mathrm{mER} = \frac{\sum_i \mathbb{I}(p_i = g_i)}{N_g}, \qquad \mathrm{mEF} = \frac{2 \cdot \mathrm{mEP} \cdot \mathrm{mER}}{\mathrm{mEP} + \mathrm{mER}}

where N_p is the number of non-null predicted entities and N_g is the number of non-null target entities. When both prediction and target are null, the indicator function returns 0. mEF is the harmonic mean of mEP and mER.
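A minimal sketch of these metrics, representing an absent EoI as `None`; the paper's exact null-handling may differ in details:

```python
def mEA(preds, targets):
    """Mean entity accuracy: exact match over all N entities."""
    return sum(p == t for p, t in zip(preds, targets)) / len(targets)

def mEP_mER_mEF(preds, targets):
    """Precision/recall/F-measure over non-null entities (business card)."""
    correct = sum(p == t for p, t in zip(preds, targets)
                  if p is not None and t is not None)
    n_pred = sum(p is not None for p in preds)     # non-null predictions
    n_tgt = sum(t is not None for t in targets)    # non-null targets
    mep, mer = correct / n_pred, correct / n_tgt
    mef = 2 * mep * mer / (mep + mer)              # harmonic mean
    return mep, mer, mef

# toy example: 4 entities, one missing prediction, one wrong prediction
preds = ["Jack", None, "123", "USA"]
targets = ["Jack", "Acme", "123", "China"]
```

On this toy example, two of four entities match exactly (mEA = 0.5), while precision is computed over the three non-null predictions and recall over the four non-null targets.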

Method \ Scenario    Passport (96k synth)   Business card (196k synth)
                     mEA                    mEP      mER      mEF
General OCR*         11.1%                  59.9%    60.5%    60.2%
Attention OCR [21]   81.6%                  79.5%    79.2%    79.4%
EATEN w/o state      84.9%                  89.7%    89.7%    89.7%
EATEN                90.8%                  90.0%    89.6%    89.8%
* Because the entity words (Name, Passport No., etc.) are extremely blurred, the template-matching rules fail completely.
TABLE II: Results on passport and business card.

Entity-aware attention networks. We design different entity-aware attention networks for the three scenarios. Generally speaking, we regard a group of EoIs that have strong semantic or positional relations as a block, and use one entity-aware decoder to capture all the EoIs in that block. In the train ticket scenario, we introduce five decoders to capture eight EoIs; the decoding steps of each decoder, i.e., the maximum number of characters one decoder can generate, are set to 14, 20, 14, 12, and 6 respectively, decided by the maximum number of characters of the corresponding EoIs. The EoIs assigned to the decoders are Ticket Number (TCN), (Starting Station (SS), Train Number (TAN), Destination Station (DS)), Date (DT), (Ticket Rates (TR), Seat Category (SC)), and Name (NM). In the passport scenario, we design five decoders to cover seven EoIs, with decoding steps of 25, 5, 15, 35, and 35; the EoIs assigned to the decoders are Passport Number, Name, (Gender, Birth Date), Birth Place, and (Issue Place, Expiry Date). Nine decoders are set to cover ten EoIs for business card, with decoding steps of 21, 13, 21, 21, 21, 21, 32, 10, and 21; the EoIs of each decoder are Telephone, Postcode, Mobile, URL, Email, FAX, Address, (Name, Title), and Company. If an entity-aware decoder is responsible for capturing more than one EoI, it generates an EOS token at the end of decoding each EoI, so different EoIs are separated by EOS tokens.
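The EOS-separation convention above can be sketched as follows; `split_eois`, the token strings, and the sample station/train values are illustrative, not the paper's implementation:

```python
def split_eois(tokens, entity_names, eos="<EOS>"):
    """Split one decoder's character stream into its EoIs: the decoder
    emits an EOS token at the end of each EoI, so EOS acts as a separator."""
    eois, current = [], []
    for tok in tokens:
        if tok == eos:
            eois.append("".join(current))
            current = []
        else:
            current.append(tok)
    return dict(zip(entity_names, eois))

# e.g. the train-ticket decoder that covers SS, TAN, and DS in one text line
tokens = (list("Beijing") + ["<EOS>"]
          + list("G101") + ["<EOS>"]
          + list("Shanghai") + ["<EOS>"])
out = split_eois(tokens, ["SS", "TAN", "DS"])
```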

Compared methods. We compare several baseline methods with our approach: (1) General OCR: a typical paradigm of OCR plus matching, which first detects and reads all text with an OCR engine, then extracts EoIs whose text content fits predefined regular expressions or whose position fits designed templates. (2) Attention OCR [21]: it reads multiple lines of scene text with an attention mechanism and has achieved state-of-the-art performance on several datasets; we adapt it to transcribe the EoIs sequentially, using EOS tokens to separate different EoIs. (3) EATEN without state transition: an ablation to verify the effectiveness of the proposed state transition.

V-C Performance

We report our experimental results in this section. In the train ticket scenario, as shown in the 4th column of Table I, EATEN achieves significant improvements over General OCR, Attention OCR, and EATEN w/o state (16.6%, 5.8%, and 4.8%, respectively).

The passport results are shown in the 2nd column of Table II: EATEN obtains a huge improvement over all other methods (especially General OCR, 90.8% vs. 11.1%). We define "entity words" as the texts that always appear at fixed positions in the images; for passport, the blurred parts are exactly these entity words, i.e., the literal words "Name", "Passport No.", etc. Template matching uses entity words to locate the corresponding EoI values, so if the entity words are missed, template matching fails. EATEN, however, does not rely on entity words and generalizes to these scenarios as long as the EoIs themselves are recognizable. As the 5th column of Table II shows, EATEN also significantly outperforms General OCR, by a relative 29.6%, in the business card scenario.

Meanwhile, benefiting from its simplified overall pipeline, EATEN achieves high speed. As shown in Table III, EATEN is roughly 7x, 12x, and 4.5x faster than General OCR in the three scenarios. Compared with Attention OCR, which also uses an attention mechanism, EATEN still shows an advantage (the last two rows).

Method \ Scenario    Train ticket   Passport   Business card
General OCR          1532ms         3000ms     1600ms
Attention OCR [21]   335ms          260ms      428ms
EATEN                221ms          242ms      357ms
TABLE III: Time costs on the three scenarios.

All experiments on the three scenarios also illustrate the generalization of EATEN. The layout of entities is fixed in the train ticket and passport scenarios but variable for business card, where the EoIs can appear at arbitrary positions. Combining the performance of EATEN across the three scenarios, we conclude that EATEN covers not only real-world applications with a fixed layout but also those with a flexible layout. It can be used in many real-world applications, including card recognition (e.g., ID card, driving license) and finance invoice recognition (quota invoice, air ticket). Considering that these results were obtained without any parameter tuning or post-processing, there is still room for improvement.

V-D Discussion

Synthetic data engine. We conduct experiments to explore the impact of the synthetic data engine. As shown in the 2nd column of Table I, EATEN trained only on images generated by our synthetic data engine achieves 55.1% mEA, which illustrates that synthetic images can simulate real images to some degree. Considering that the data distribution of the synthetic data has a large gap from the real data, it is reasonable that EATEN w/o state slightly outperforms EATEN (57.0% vs. 55.1%), which can be considered a regular fluctuation. As the 3rd column of Table I shows, the performance of EATEN trained only on real data is relatively poor compared with training on fused data; the number of real images is so small that the model may overfit. We therefore add synthetic data to the real data and train a new model. As shown in the 4th column of Table I, the model trained on the fused training data outperforms the model trained only on real data by a large margin (95.8% vs. 86.2%, 91.0% vs. 55.4%, 90.0% vs. 68.1%). In summary, synthetic images are capable of simulating real-world images to some degree and contribute to the improvement of the final performance.

Fig. 5: Examples of EoIs extraction: red font indicates EoI is wrongly predicted and black font indicates EoI is correctly predicted. (a) bad cases. (b) good cases. Entry names are labeled for better demonstration.

Examples. We analyze 0.4k real images to understand the weaknesses and advantages of EATEN. As shown in Fig. 5, the most common errors fall into three categories: challenging lighting conditions, unexpected disturbance, and serious text fusion, which are also challenging for traditional OCR; this could be alleviated by adding a denoising component to EATEN. On the other hand, Fig. 5 also shows good cases: EATEN covers most texts with arbitrary shapes, projective/affine transformations, and position drift without any correction. In some situations, such as text fusion and slight lighting changes, EATEN still extracts EoIs correctly, which shows its robustness in complex situations.

VI Conclusions

In this paper, we proposed an end-to-end framework called EATEN for extracting EoIs from images. A dataset with three real-world scenarios was established to verify the effectiveness of the proposed method and to complement research on EoI extraction. In contrast to traditional approaches based on text detection and recognition, EATEN is trained efficiently without bounding box or full text annotations, and directly predicts the target entities of an input image in one shot without any bells and whistles. It shows superior performance in all scenarios and full capacity for extracting EoIs from images with or without a fixed layout. This study provides a new perspective on text recognition, EoI extraction, and structural information extraction.


  • [1] E. Aslan, T. Karakaya, E. Unver, and Y. Akgul (2016) A part based modeling approach for invoice parsing. In Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, pp. 390–397. Cited by: §II.
  • [2] E. Bart and P. Sarkar (2010) Information extraction by finding repeated structure. In International Workshop on Document Analysis Systems, Cited by: §II.
  • [3] R. Gal, N. Morag, and R. Shilkrot (2018) Visual-linguistic methods for receipt field recognition. In ACCV, pp. 2315–2324. Cited by: §II.
  • [4] L. Gómez, A. Mafla, M. Rusinol, and D. Karatzas (2018) Single shot scene text retrieval. In ECCV, pp. 700–715. Cited by: §I, §II.
  • [5] A. Gupta, A. Vedaldi, and A. Zisserman (2016) Synthetic data for text localisation in natural images. In CVPR, pp. 2315–2324. Cited by: §II, §IV-A.
  • [6] H. T. Ha, M. Medved’, Z. Nevěřilová, and A. Horák (2018) Recognition of ocr invoice metadata block types. In International Conference on Text, Speech, and Dialogue, pp. 304–312. Cited by: §I.
  • [7] H. Hamza, Y. Belaïd, and A. Belaïd (2007) Case-based reasoning for invoice analysis and recognition. In Case-Based Reasoning Research and Development, pp. 404–418. Cited by: §I, §II.
  • [8] D. He, Y. Li, A. Gorban, D. Heath, J. Ibarz, Q. Yu, D. Kifer, and C. L. Giles (2018) Guided attention for large scale scene text verification. arXiv preprint:1804.08588. Cited by: §II.
  • [9] H. Hu, C. Zhang, Y. Luo, Y. Wang, J. Han, and E. Ding (2017) Wordsup: exploiting word annotations for character based text detection. In ICCV, Cited by: §II.
  • [10] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman (2014) Synthetic data and artificial neural networks for natural scene text recognition. arXiv preprint:1406.2227. Cited by: §IV-A.
  • [11] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman (2016) Reading text in the wild with convolutional neural networks. IJCV 116, pp. 1–12. Cited by: §II.
  • [12] B. Janssen, E. Saund, E. Bier, P. Wall, and M. A. Sprague (2012) Receipts2Go: the big world of small documents. In ACM symposium on Document engineering, pp. 121–124. Cited by: §II.
  • [13] H. Li, P. Wang, C. Shen, and G. Zhang (2018) Show, attend and read: a simple and strong baseline for irregular text recognition. CoRR abs/1811.00751. Cited by: §II, §II.
  • [14] J. Liu, C. Zhang, Y. Sun, J. Han, and E. Ding (2018) Detecting text in the wild with deep character embedding network. In ACCV, pp. 501–517. Cited by: §II.
  • [15] S. Milyaev, O. Barinova, T. Novikova, P. Kohli, and V. Lempitsky (2015) Fast and accurate scene text understanding with image binarization and off-the-shelf OCR. IJDAR 18 (2), pp. 169–182. Cited by: §I.
  • [16] F. Schulz, M. Ebbecke, M. Gillmann, B. Adrian, S. Agne, and A. Dengel (2009) Seizing the treasure: transferring knowledge in invoice analysis. In ICDAR, pp. 848–852. Cited by: §I, §II.
  • [17] B. Shi, X. Bai, and C. Yao (2017) An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. PAMI 39 (11), pp. 2298–2304. Cited by: §II.
  • [18] W. Sui, Q. Zhang, J. Yang, and W. Chu (2018) A novel integrated framework for learning both text detection and recognition. In ICPR, pp. 2233–2238. Cited by: §II.
  • [19] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In CVPR, pp. 2818–2826. Cited by: §III.
  • [20] T. Wilkinson, J. Lindstrom, and A. Brun (2017) Neural ctrl-f: segmentation-free query-by-string word spotting in handwritten manuscript collections. In ICCV, pp. 4433–4442. Cited by: §II.
  • [21] Z. Wojna, A. Gorban, D. Lee, K. Murphy, Q. Yu, Y. Li, and J. Ibarz (2017) Attention-based extraction of structured information from street view imagery. arXiv preprint:1704.03549. Cited by: §II, §V-B, TABLE I, TABLE II, TABLE III.
  • [22] Y. Xu, Y. Wang, W. Zhou, Y. Wang, Z. Yang, and X. Bai (2018) TextField: learning a deep direction field for irregular scene text detection. arXiv preprint:1812.01393. Cited by: §II.
  • [23] C. Zhang, B. Liang, Z. Huang, M. En, J. Han, E. Ding, and X. Ding (2019) Look more than once: an accurate detector for text of arbitrary shapes. arXiv preprint:1904.06535. Cited by: §II.
  • [24] X. Zhou, C. Yao, H. Wen, Y. Wang, S. Zhou, W. He, and J. Liang (2017) EAST: an efficient and accurate scene text detector. In CVPR, pp. 2642–2651. Cited by: §II, §II.