
TextCohesion: Detecting Text for Arbitrary Shapes

04/22/2019
by   Weijia Wu, et al.

In this paper, we propose a pixel-wise detector named TextCohesion for scene text detection, especially for text with arbitrary shapes. TextCohesion splits a text instance into five key components: a Text Skeleton and four Directional Pixel Regions. These components are easier to handle than the entire text instance. We also introduce a confidence scoring mechanism to filter out symbols and characters that merely resemble text. Our method integrates text context intensively and can grasp clues even against very complex backgrounds. Experiments on challenging benchmarks demonstrate that TextCohesion clearly outperforms state-of-the-art methods, achieving F-measures of 84.6 on Total-Text and 86.3 on SCUT-CTW1500.



1 Introduction

Detecting text in the wild is a fundamental computer vision task that directly determines the quality of subsequent recognition. Many real-world applications depend on accurate text detection, such as photo translation [Yi and Tian(2014)] and autonomous driving [Zhu et al.(2018)Zhu, Liao, Yang, and Liu]. Horizontal-text methods [Tian et al.(2016)Tian, Huang, He, He, and Qiao, Wu et al.(2017)Wu, Wang, Dai, Zhang, and Cao, Zhu et al.(2017)Zhu, Jiang, Yang, Wang, Li, Fu, Wang, and Luo] no longer meet practical requirements, while multi-oriented [He et al.(2016)He, Zhang, Ren, and Sun, Liu and Jin(2017), Liao et al.(2017)Liao, Shi, Bai, Wang, and Liu, Zhou et al.(2017)Zhou, Yao, Wen, Wang, Zhou, He, and Liang, Hu et al.(2017)Hu, Zhang, Luo, Wang, Han, and Ding, Deng et al.(2018)Deng, Liu, Li, and Cai] and more flexible pixel-wise detectors [Long et al.(2018)Long, Ruan, Zhang, He, Wu, and Yao, Zhu and Du(2018), Li et al.(2018)Li, Wang, Hou, Liu, Lu, and Yang] have become the mainstream. Although these methods are good enough to be deployed in industrial products, precisely locating text instances, especially those captured by mobile sensors, remains a big challenge because of arbitrary angles, shapes, and complex backgrounds.

The first challenge is text instances with irregular shapes. Unlike other common objects, the shape of a text instance cannot be accurately described by a horizontal box or an oriented quadrilateral. Some typical methods (e.g. EAST [Zhou et al.(2017)Zhou, Yao, Wen, Wang, Zhou, He, and Liang], TextBoxes++ [Liao et al.(2018)Liao, Shi, and Bai]) perform well on common benchmarks (e.g. ICDAR 2013 [Karatzas et al.(2013)Karatzas, Shafait, Uchida, Iwamura, i Bigorda, Mestre, Mas, Mota, Almazan, and De Las Heras], ICDAR 2015 [Karatzas et al.(2015)Karatzas, Gomez-Bigorda, Nicolaou, Ghosh, Bagdanov, Iwamura, Matas, Neumann, Chandrasekhar, Lu, et al.]), but these regression-based methods may fall short on curved-text benchmarks (e.g. Total-Text [Ch’ng and Chan(2017)], SCUT-CTW1500 [Yuliang et al.(2017)Yuliang, Lianwen, Shuaitao, and Sheng]), as shown in Figure 1 (a).

The second challenge is separating text instances with adjacent boundaries. Although pixel-wise methods are not constrained to a fixed shape, they may still fail to separate text areas whose edges are very close, as shown in Figure 1 (b).

The third challenge is that text candidates may face the false-positive dilemma [Xie et al.(2018)Xie, Zang, Shao, Yu, Yao, and Li] because of a lack of context information. As shown at the bottom of Figure 1, symbols or characters that merely resemble text may confuse detectors and be misclassified.

To overcome the challenges listed above, we propose a novel method called TextCohesion. As shown in Figure 2, our method treats a text instance as a combination of a Text Skeleton and four Directional Pixel Regions, where the former roughly represents the shape and size while the latter refine the text area and its edges from four directions. Notably, a pixel may belong to more than one Directional Pixel Region (e.g. up and left), which gives it more chances to be recovered. Further, a Text Skeleton whose average confidence score exceeds a defined threshold (e.g. 0.5) is kept as a candidate.

(a) (b) (c)
Figure 1: (a) Regression-based methods suffer from a fixed shape. (b) Pixel-based methods may not separate very adjacent boundaries. (c) TextCohesion. Both pixel-based and regression-based methods may face the false-positive dilemma.
(a) (b) (c) (d)
Figure 2: (a) Text Skeleton. (b) Directional Pixel Region. (c) Confidence Scoring Mechanism. The average confidence score of a TS is used to filter out false positives. (d) Prediction. Every text instance is composed of a TS and DPRs.

The contributions of this paper are threefold:

  • We propose a novel method, TextCohesion, with three components — a Text Skeleton, Directional Pixel Regions, and a Confidence Scoring Mechanism — which outperforms state-of-the-art methods on curved-text benchmarks.

  • The proposed method works well for all shapes of text.

  • The proposed method can further filter out symbols or characters that are similar to text.

2 Related Works

Modern detectors are mainly based on deep neural networks, and there are two main trends in text detection: regression-based and pixel-based methods. Inspired by promising object detection architectures such as Faster R-CNN [Ren et al.(2015)Ren, He, Girshick, and Sun] and SSD [Liu et al.(2016)Liu, Anguelov, Erhan, Szegedy, Reed, Fu, and Berg], a number of regression-based detectors were proposed, which simply regress the coordinates of candidate bounding boxes as the final prediction. TextBoxes [Liao et al.(2017)Liao, Shi, Bai, Wang, and Liu] adapts SSD and elongates the default boxes to match text instances. By modifying Faster R-CNN, Rotation Region Proposal Networks [Ma et al.(2018)Ma, Shao, Ye, Wang, Wang, Zheng, and Xue] insert a rotation branch to fit the oriented shapes of text in natural images. These methods achieve satisfying performance on horizontal or multi-oriented text, but they may still be limited by the shape of the bounding box, even with rotations.

Mainstream pixel-wise methods draw inspiration from the fully convolutional network (FCN) [Long et al.(2015)Long, Shelhamer, and Darrell], which removes all fully connected layers and is widely used to generate semantic segmentation maps; transposed convolutions then restore the shrunken features to the original size. TextSnake [Long et al.(2018)Long, Ruan, Zhang, He, Wu, and Yao] treats a text instance as a sequence of ordered, overlapping disks centred on the symmetric axis, each associated with a potentially variable radius and orientation; it made significant progress on curved-text benchmarks. TextField [Xu et al.(2019)Xu, Wang, Zhou, Wang, Yang, and Bai] learns a direction field pointing away from the nearest text boundary at each text point; the field is represented as an image of two-dimensional vectors. SPCNET [Xie et al.(2018)Xie, Zang, Shao, Yu, Yao, and Li], based on the Feature Pyramid Network [Lin et al.(2017)Lin, Dollar, Girshick, He, Hariharan, and Belongie] and Mask R-CNN [He et al.(2017)He, Gkioxari, Dollár, and Girshick], inserts a Text Context Module and a Re-Score mechanism to address the lack of context information and inaccurate classification scores.

3 Methodology

This section presents the details of TextCohesion. First, we illustrate the feature extractor and how its features are used. Next, we describe the TS, the DPR, and the Confidence Scoring Mechanism. Finally, we describe how the corresponding labels are generated and state the training objectives.

Figure 3: The pipeline of TextCohesion. To detect irregular text instances, we predict the Text Skeleton (TS) and Directional Pixel Regions (DPRs). The post-processing links the TS and DPRs to reconstruct the text. All TS are verified by the Confidence Scoring Mechanism.

3.1 Pipeline

We introduce the Text Skeleton (TS) and Directional Pixel Region (DPR) to precisely capture text instances. For the TS, we use a line linked by several dots (e.g. 15) to roughly represent the text instance. Each DPR is then split into several cells by the dots of the TS, and the tangent value between two adjacent dots determines which region the corresponding cell falls into. The Text Region (TR) is a mask that restricts the bounds of the TS. Afterwards, confidence scoring is applied to filter out false positives. Finally, the remaining TS, TR, and DPRs are combined to form the text. The whole process is presented in Figure 3.
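As a rough illustration, the final linking step can be sketched as follows. This is a hypothetical simplification operating on pixel sets; the actual post-processing works on predicted segmentation maps, and the function name is ours:

```python
# Hypothetical sketch of the linking step: a text instance is rebuilt as the
# union of its Text Skeleton (TS) pixels and its Directional Pixel Region
# (DPR) pixels, restricted by the Text Region (TR) mask.
def reconstruct_instance(ts, dprs, tr):
    """ts, tr: sets of (row, col) pixels; dprs: dict direction -> pixel set."""
    pixels = set(ts)
    for region in dprs.values():
        pixels |= region          # add pixels recovered from each direction
    return pixels & tr            # TR restricts the bounds of the instance
```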

3.2 Feature Extractor

We choose VGG16 [Simonyan and Zisserman(2014)] as our feature extractor for a fair comparison. Inspired by FPN [Lin et al.(2017)Lin, Dollar, Girshick, He, Hariharan, and Belongie], lateral connections are also inserted to enrich the features; the feature extractor is shown in Figure 4. In the first stage, images are downsampled into multi-level features. The features are then gradually upsampled to the original size and merged with the corresponding outputs of the previous stage. Finally, several maps are generated to represent the TS, DPRs, and TR.
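The upsample-and-merge step can be sketched as below. This is a minimal illustration of an FPN-style lateral connection only; the 1×1 channel projections and smoothing convolutions of the real architecture are omitted, and the function names are ours:

```python
import numpy as np

def upsample2x(feat):
    # nearest-neighbour 2x upsampling of an (H, W) feature map
    return feat.repeat(2, axis=0).repeat(2, axis=1)

def lateral_merge(coarse, lateral):
    # FPN-style merge: upsample the coarser map and add the lateral feature
    return upsample2x(coarse) + lateral
```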

Figure 4: Feature Extractor

3.3 Text Skeleton

As shown in Figure 6 (a), we use the TS to roughly represent text candidates. Specifically, the dots in the TS are treated as a series of starting points for further searching the corresponding regions of interest. Moreover, the TS is also used to filter out false positives. Compared with the entire text instance, the TS is less confused by adjacent boundaries, easier to locate, and approximately represents the shape of the original text. Therefore, we treat every TS as a candidate.

3.4 Directional Pixel Region

Pixels that lie in a text instance but not in its TS are categorized into four regions, as shown in Figure 5.

(a) (b) (c) (d)
Figure 5: (a) up region (b) down region (c) left region (d) right region.

DPRs are used to segment the edges elaborately. Notably, a pixel may overlap more than one region (e.g. up and right); in aggregate, the information captured by the four directions is approximately twice that of the original image. A pixel receives a higher confidence score when it is confirmed by more directions, which makes our method more robust, especially against complex backgrounds.

3.5 Confidence Scoring Mechanism

To filter out false positives, we treat the values in the TS as probabilities of actual text rather than simple binary flags. The score of a TS is the average of its pixel probabilities:

S(TS) = (1/N) ∑_{i=1}^{N} p_i,    keep TS if S(TS) > λ    (1)

where p_i is the predicted text probability of the i-th pixel in the TS, N is the total number of pixels in the TS, and λ is the threshold value used to filter out TS with low confidence scores. Note that the Confidence Scoring Mechanism transfers easily to other detectors.
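The mechanism reduces to a mean-and-threshold test, sketched below. The function names and the default threshold are illustrative:

```python
# Sketch of the Confidence Scoring Mechanism: average the per-pixel text
# probabilities over a predicted Text Skeleton and keep the candidate only
# if the mean exceeds the threshold.
def ts_confidence(probs):
    return sum(probs) / len(probs)

def keep_candidate(probs, threshold=0.5):
    return ts_confidence(probs) > threshold
```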

3.6 Label Generation

3.6.1 Label of Text Skeleton

The TS is a line obtained by averaging sampled points on the two longest edges of a text area, and it represents the condensed information of a text instance. Its appearance is shown in Figure 6 (a). For a text instance represented by a group of vertexes in a certain order, we first pick the vertexes that change the area considerably. We then link the vertexes to obtain a connected graph and compute the lengths of all edges. The two shortest edges are discarded, and the remaining two boundaries are what we need. Finally, the line between the two boundaries is the TS.
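The averaging step above can be sketched as follows, assuming the two long boundaries have already been sampled at corresponding points (the function name is ours):

```python
# Hypothetical sketch of TS label generation: after discarding the two
# shortest edges, sample corresponding points on the two remaining long
# boundaries and take their midpoints as the skeleton dots.
def skeleton_dots(top, bottom):
    """top, bottom: equal-length lists of (x, y) samples on the long edges."""
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in zip(top, bottom)]
```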

(a) (b)
Figure 6: (a) Text Skeleton is divided by some dots. (b) Directional Pixel Regions are filled by the tangent angle of dots.

3.6.2 Label of Directional Pixel Region

As shown in Figure 6 (b), pixels that lie in a text instance but not in its TS are categorized into four regions defined by the following assignment:

R(x) = up/down regions if |θ_k| < 45°,  left/right regions if |θ_k| > 45°    (2)

where d_k and d_{k+1} are adjacent dots of the TS, θ_k is the tangent angle of the segment between them, and x is the coordinate of a pixel in the corresponding cell. In particular, dots whose tangent angle is near 45 or 135 degrees are difficult to assign to a single pair of regions; specifically, dots with tangent angles between 30 and 60 degrees in magnitude are considered to belong to two adjacent regions (e.g. up and left).
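The angle-based assignment can be sketched as below. This is a hypothetical reading of the rule, with the region split and the function name assumed by us:

```python
import math

# Hypothetical sketch: the tangent angle of the segment between two adjacent
# TS dots decides whether the cell contributes to the up/down pair, the
# left/right pair, or (when near-diagonal, 30-60 degrees in magnitude)
# both pairs at once.
def cell_regions(p0, p1):
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    theta = abs(math.degrees(math.atan2(dy, dx)))
    theta = min(theta, 180.0 - theta)   # fold the angle into [0, 90]
    if theta < 30:                      # near-horizontal segment
        return {"up", "down"}
    if theta > 60:                      # near-vertical segment
        return {"left", "right"}
    return {"up", "down", "left", "right"}  # ambiguous diagonal
```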

3.7 Training Objectives

The proposed model is trained end-to-end, with the following loss function as the objective:

L = L_ts + λ L_dpr + L_tr    (3)

where λ is set to 3.0 experimentally, and L_ts is a self-adjust cross entropy loss [Deng et al.(2018)Deng, Liu, Li, and Cai]: putting the same weight on every positive pixel would be unfair, since a large instance would then contribute a large loss while a small one contributes little. The total loss should treat all instances equally regardless of their size.

L_ts = (1/M) ∑_{i=1}^{M} (1/S_i) ∑_{j} CE(p_{ij}, y_{ij})    (4)

where M is the number of text instances in an image, S_i is the size of instance i, and p_{ij} and y_{ij} are the predicted probability of a pixel belonging to the TS and the corresponding ground truth, respectively.

L_dpr = ∑_{j} SmoothL1(q_j, q_j^*)    (5)

where L_dpr is the loss of the DPRs, and q_j and q_j^* are the prediction for a pixel falling into a DPR and its ground truth, respectively. We use the Smooth L1 loss [Ren et al.(2015)Ren, He, Girshick, and Sun] to reduce the effect of outliers.

For L_tr, we also choose the cross entropy loss; it penalizes false positives and prevents points from falling outside the text boundaries.
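The size-normalised weighting of Eq. (4) can be sketched as follows. This is an illustration of the self-adjust idea only, with data layout and function name assumed by us:

```python
import math

# Sketch of a self-adjust cross entropy: each instance's pixel-wise cross
# entropy is normalised by the instance size, so large and small instances
# contribute equally to the total loss.
def self_adjust_ce(instances):
    """instances: list of (size, [(p, y), ...]) with p in (0, 1), y in {0, 1}."""
    total = 0.0
    for size, pixels in instances:
        ce = sum(-(y * math.log(p) + (1 - y) * math.log(1 - p))
                 for p, y in pixels)
        total += ce / size          # normalise by instance size
    return total / len(instances)  # average over instances
```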

4 Experiments

In this section, we evaluate the proposed method on challenging curved-text benchmarks for scene text detection and compare it with other algorithms.

SynthText [Gupta et al.(2016)Gupta, Vedaldi, and Zisserman] is a large-scale dataset containing about 800K synthetic images, created by blending natural images with text rendered in random fonts, sizes, colours, and orientations; the images are thus quite realistic. We use this dataset to pre-train our model.

Total-Text [Ch’ng and Chan(2017)] is a word-level English curved-text dataset, split into training and testing sets of 1255 and 300 images respectively.

SCUT-CTW1500 [Yuliang et al.(2017)Yuliang, Lianwen, Shuaitao, and Sheng] is another dataset mainly consisting of curved text. It consists of 1000 training images and 500 test images.

4.1 Data Augmentation

Images are randomly rotated, cropped, and mirrored; afterwards, colour and lightness are randomly adjusted. We ensure that text that was legible before augmentation remains legible afterwards.

4.2 Implementation Details

Our method is implemented in PyTorch [Paszke et al.(2017)Paszke, Gross, Chintala, Chanan, Yang, DeVito, Lin, Desmaison, Antiga, and Lerer]. The network is pre-trained on SynthText for two epochs and fine-tuned on the other datasets. We adopt the Adam optimizer. During pre-training, the learning rate is fixed at 0.001. During fine-tuning, the learning rate is initially set to 0.0001 and decays by a factor of 0.94 every 10000 iterations. All experiments are conducted on a regular workstation (CPU: Intel(R) Core(TM) i7-7800X @ 3.50GHz; GPU: GTX 1080). We train our model with a batch size of 4 on one GPU.
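The fine-tuning schedule above amounts to a simple step-wise exponential decay, sketched here (function name ours):

```python
# Sketch of the fine-tuning learning-rate schedule described above:
# start at 1e-4 and multiply by 0.94 every 10000 iterations.
def learning_rate(step, base=1e-4, decay=0.94, every=10000):
    return base * decay ** (step // every)
```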

4.3 Experiment Results

Experiment on Total-Text. Fine-tuning stops at 35 epochs. The thresholds are set to 0.54, 0.2, and 0.1, respectively. In both training and testing, all images are resized to 512 × 512. The methods chosen for comparison are listed in Table 1.

Method Precision Recall F-measure
DeconvNet [Neumann and Matas(2010)] 33.0 40.0 36.0
EAST [Zhou et al.(2017)Zhou, Yao, Wen, Wang, Zhou, He, and Liang] 50.0 36.2 42.0
TextSnake [Long et al.(2018)Long, Ruan, Zhang, He, Wu, and Yao] 82.7 74.5 78.4
TextField [Xu et al.(2019)Xu, Wang, Zhou, Wang, Yang, and Bai] 79.9 81.2 80.6
CSE [Liu et al.(2019)Liu, Lin, Yang, Liu, Lin, and Goh] 81.4 79.7 80.2
PSENet-1s [Li et al.(2018)Li, Wang, Hou, Liu, Lu, and Yang] 84.02 77.96 80.87
SPCNET[Xie et al.(2018)Xie, Zang, Shao, Yu, Yao, and Li] 82.8 83.0 82.9
CRAFT [Baek et al.(2019)Baek, Lee, Han, Yun, and Lee] 87.6 79.9 83.6
Ours 88.1 81.4 84.6
Table 1: Experimental results on Total-Text.

On SCUT-CTW1500, we use the same settings as for Total-Text, except that the threshold is set to 0.29. The comparison can be found in Table 2.

Method Precision Recall F-measure
SegLink [Shi et al.(2017)Shi, Bai, and Belongie] 42.3 40.0 40.8
EAST [Zhou et al.(2017)Zhou, Yao, Wen, Wang, Zhou, He, and Liang] 78.7 49.1 60.4
DPMNet [Liu and Jin(2017)] 69.9 56.0 62.2
CTD [Noh et al.(2015)Noh, Hong, and Han] 74.3 65.2 69.5
CTD+TLOC [Noh et al.(2015)Noh, Hong, and Han] 77.4 69.8 73.4
TextSnake [Long et al.(2018)Long, Ruan, Zhang, He, Wu, and Yao] 67.9 85.3 75.6
PSENet-1s [Li et al.(2018)Li, Wang, Hou, Liu, Lu, and Yang] 84.8 79.7 82.2
TextMountain [Zhu and Du(2018)] 82.9 83.4 83.2
PAN MASK R-CNN [Huang et al.(2019)Huang, Zhong, Sun, and Huo] 86.8 83.2 85.0
Ours 88.0 84.7 86.3
Table 2: Experimental results on SCUT-CTW1500.

Our method achieves the highest F-measure on both benchmarks. This suggests that the TS is a more appropriate representation than dealing directly with the entire text instance. After candidate selection, a text instance is initialized as its corresponding TS and then gradually diffuses outward along the DPRs belonging to that TS. In this process, pixels belonging to a specific DPR are first searched in that direction (e.g. the up region is first searched upward along the TS), and can then be supplemented from other searching paths (e.g. the up region is supplemented while searching the left and right regions). In other words, the direction of a pixel is not unique, and a text instance has multiple opportunities to be recovered completely.

4.4 Transferable Confidence Scoring Mechanism

To test the effectiveness of the Confidence Scoring Mechanism, we run an ablation on SCUT-CTW1500.

Method Precision Recall F-measure
TextCohesion with no scoring mechanism 82.3 84.9 83.5
TextCohesion with scoring mechanism 88.0 84.7 86.3
Table 3: Effectiveness of Confidence Scoring Mechanism on SCUT-CTW1500.

As Table 3 shows, precision improves significantly when the Confidence Scoring Mechanism is inserted. We argue that it might also help other detectors resolve precision-related problems.

4.5 Conclusion and Future Work

In this paper, we introduce a novel text detector that outperforms state-of-the-art methods on curved-text benchmarks (Figure 7). The proposed Directional Pixel Regions and Text Skeleton, together with the confidence scoring mechanism, may be the key factors that make it effective against arbitrary shapes, closely adjacent boundaries, and the false-positive dilemma. In future research, we plan to explore an end-to-end detection and recognition system for text with arbitrary shapes.

(a)
(b)
Figure 7: (a) Total-Text results. (b) SCUT-CTW1500 results.

References

  • [Baek et al.(2019)Baek, Lee, Han, Yun, and Lee] Youngmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, and Hwalsuk Lee. Character region awareness for text detection. arXiv preprint arXiv:1904.01941, 2019.
  • [Ch’ng and Chan(2017)] Chee Kheng Ch’ng and Chee Seng Chan. Total-text: A comprehensive dataset for scene text detection and recognition. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 1, pages 935–942. IEEE, 2017.
  • [Coates et al.(2011)Coates, Carpenter, Case, Satheesh, Suresh, Wang, Wu, and Ng] Adam Coates, Blake Carpenter, Carl Case, Sanjeev Satheesh, Bipin Suresh, Tao Wang, David J Wu, and Andrew Y Ng. Text detection and character recognition in scene images with unsupervised feature learning. In ICDAR, volume 11, pages 440–445, 2011.
  • [Deng et al.(2018)Deng, Liu, Li, and Cai] Dan Deng, Haifeng Liu, Xuelong Li, and Deng Cai. Pixellink: Detecting scene text via instance segmentation. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • [Epshtein et al.(2010)Epshtein, Ofek, and Wexler] Boris Epshtein, Eyal Ofek, and Yonatan Wexler. Detecting text in natural scenes with stroke width transform. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2963–2970. IEEE, 2010.
  • [Gupta et al.(2016)Gupta, Vedaldi, and Zisserman] Ankush Gupta, Andrea Vedaldi, and Andrew Zisserman. Synthetic data for text localisation in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2315–2324, 2016.
  • [He et al.(2016)He, Zhang, Ren, and Sun] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • [He et al.(2017)He, Gkioxari, Dollár, and Girshick] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
  • [Hu et al.(2017)Hu, Zhang, Luo, Wang, Han, and Ding] Han Hu, Chengquan Zhang, Yuxuan Luo, Yuzhuo Wang, Junyu Han, and Errui Ding. Wordsup: Exploiting word annotations for character based text detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 4940–4949, 2017.
  • [Huang et al.(2013)Huang, Lin, Yang, and Wang] Weilin Huang, Zhe Lin, Jianchao Yang, and Jue Wang. Text localization in natural images using stroke feature transform and text covariance descriptors. In Proceedings of the IEEE International Conference on Computer Vision, pages 1241–1248, 2013.
  • [Huang et al.(2019)Huang, Zhong, Sun, and Huo] Zhida Huang, Zhuoyao Zhong, Lei Sun, and Qiang Huo. Mask r-cnn with pyramid attention network for scene text detection. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 764–772. IEEE, 2019.
  • [Jain and Yu(1998)] Anil K Jain and Bin Yu. Automatic text location in images and video frames. Pattern recognition, 31(12):2055–2076, 1998.
  • [Karatzas et al.(2013)Karatzas, Shafait, Uchida, Iwamura, i Bigorda, Mestre, Mas, Mota, Almazan, and De Las Heras] Dimosthenis Karatzas, Faisal Shafait, Seiichi Uchida, Masakazu Iwamura, Lluis Gomez i Bigorda, Sergi Robles Mestre, Joan Mas, David Fernandez Mota, Jon Almazan Almazan, and Lluis Pere De Las Heras. Icdar 2013 robust reading competition. In 2013 12th International Conference on Document Analysis and Recognition, pages 1484–1493. IEEE, 2013.
  • [Karatzas et al.(2015)Karatzas, Gomez-Bigorda, Nicolaou, Ghosh, Bagdanov, Iwamura, Matas, Neumann, Chandrasekhar, Lu, et al.] Dimosthenis Karatzas, Lluis Gomez-Bigorda, Anguelos Nicolaou, Suman Ghosh, Andrew Bagdanov, Masakazu Iwamura, Jiri Matas, Lukas Neumann, Vijay Ramaseshan Chandrasekhar, Shijian Lu, et al. Icdar 2015 competition on robust reading. In 2015 13th International Conference on Document Analysis and Recognition (ICDAR), pages 1156–1160. IEEE, 2015.
  • [Lee et al.(2011)Lee, Lee, Lee, Yuille, and Koch] Jung-Jin Lee, Pyoung-Hean Lee, Seong-Whan Lee, Alan Yuille, and Christof Koch. Adaboost for text detection in natural scene. In 2011 International Conference on Document Analysis and Recognition, pages 429–434. IEEE, 2011.
  • [Li et al.(2018)Li, Wang, Hou, Liu, Lu, and Yang] Xiang Li, Wenhai Wang, Wenbo Hou, Ruo-Ze Liu, Tong Lu, and Jian Yang. Shape robust text detection with progressive scale expansion network. arXiv preprint arXiv:1806.02559, 2018.
  • [Liao et al.(2017)Liao, Shi, Bai, Wang, and Liu] Minghui Liao, Baoguang Shi, Xiang Bai, Xinggang Wang, and Wenyu Liu. Textboxes: A fast text detector with a single deep neural network. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
  • [Liao et al.(2018)Liao, Shi, and Bai] Minghui Liao, Baoguang Shi, and Xiang Bai. Textboxes++: A single-shot oriented scene text detector. IEEE transactions on image processing, 27(8):3676–3690, 2018.
  • [Lin et al.(2017)Lin, Dollar, Girshick, He, Hariharan, and Belongie] Tsung Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In IEEE Conference on Computer Vision Pattern Recognition, 2017.
  • [Liu et al.(2016)Liu, Anguelov, Erhan, Szegedy, Reed, Fu, and Berg] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
  • [Liu and Jin(2017)] Yuliang Liu and Lianwen Jin. Deep matching prior network: Toward tighter multi-oriented text detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1962–1969, 2017.
  • [Liu et al.(2019)Liu, Lin, Yang, Liu, Lin, and Goh] Zichuan Liu, Guosheng Lin, Sheng Yang, Fayao Liu, Weisi Lin, and Wang Ling Goh. Towards robust curve text detection with conditional spatial expansion. 2019.
  • [Long et al.(2015)Long, Shelhamer, and Darrell] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431–3440, 2015.
  • [Long et al.(2018)Long, Ruan, Zhang, He, Wu, and Yao] Shangbang Long, Jiaqiang Ruan, Wenjie Zhang, Xin He, Wenhao Wu, and Cong Yao. Textsnake: A flexible representation for detecting text of arbitrary shapes. In Proceedings of the European Conference on Computer Vision (ECCV), pages 20–36, 2018.
  • [Ma et al.(2018)Ma, Shao, Ye, Wang, Wang, Zheng, and Xue] Jianqi Ma, Weiyuan Shao, Hao Ye, Li Wang, Hong Wang, Yingbin Zheng, and Xiangyang Xue. Arbitrary-oriented scene text detection via rotation proposals. IEEE Transactions on Multimedia, 20(11):3111–3122, 2018.
  • [Neumann and Matas(2010)] Lukas Neumann and Jiri Matas. A method for text localization and recognition in real-world images. In Asian Conference on Computer Vision, pages 770–783. Springer, 2010.
  • [Noh et al.(2015)Noh, Hong, and Han] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic segmentation. In IEEE International Conference on Computer Vision, 2015.
  • [Paszke et al.(2017)Paszke, Gross, Chintala, Chanan, Yang, DeVito, Lin, Desmaison, Antiga, and Lerer] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
  • [Ren et al.(2015)Ren, He, Girshick, and Sun] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: towards real-time object detection with region proposal networks. 2015.
  • [Shi et al.(2017)Shi, Bai, and Belongie] Baoguang Shi, Xiang Bai, and Serge Belongie. Detecting oriented text in natural images by linking segments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2550–2558, 2017.
  • [Simonyan and Zisserman(2014)] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  • [Tian et al.(2016)Tian, Huang, He, He, and Qiao] Zhi Tian, Weilin Huang, Tong He, Pan He, and Yu Qiao. Detecting text in natural image with connectionist text proposal network. In European conference on computer vision, pages 56–72. Springer, 2016.
  • [Wang et al.(2011)Wang, Babenko, and Belongie] Kai Wang, Boris Babenko, and Serge Belongie. End-to-end scene text recognition. In 2011 International Conference on Computer Vision, pages 1457–1464. IEEE, 2011.
  • [Wang et al.(2012)Wang, Wu, Coates, and Ng] Tao Wang, David J Wu, Adam Coates, and Andrew Y Ng. End-to-end text recognition with convolutional neural networks. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), pages 3304–3308. IEEE, 2012.
  • [Wu et al.(2017)Wu, Wang, Dai, Zhang, and Cao] Dao Wu, Rui Wang, Pengwen Dai, Yueying Zhang, and Xiaochun Cao. Deep strip-based network with cascade learning for scene text localization. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 1, pages 826–831. IEEE, 2017.
  • [Xie et al.(2018)Xie, Zang, Shao, Yu, Yao, and Li] Enze Xie, Yuhang Zang, Shuai Shao, Gang Yu, Cong Yao, and Guangyao Li. Scene text detection with supervised pyramid context network. arXiv preprint arXiv:1811.08605, 2018.
  • [Xu et al.(2019)Xu, Wang, Zhou, Wang, Yang, and Bai] Yongchao Xu, Yukang Wang, Wei Zhou, Yongpan Wang, Zhibo Yang, and Xiang Bai. Textfield: Learning a deep direction field for irregular scene text detection. IEEE Transactions on Image Processing, 2019.
  • [Yao et al.(2012)Yao, Bai, Liu, Ma, and Tu] Cong Yao, Xiang Bai, Wenyu Liu, Yi Ma, and Zhuowen Tu. Detecting texts of arbitrary orientations in natural images. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 1083–1090. IEEE, 2012.
  • [Yi and Tian(2011)] Chucai Yi and YingLi Tian. Text string detection from natural scenes by structure-based partition and grouping. IEEE Transactions on Image Processing, 20(9):2594–2605, 2011.
  • [Yi and Tian(2014)] Chucai Yi and Yingli Tian. Scene text recognition in mobile applications by character descriptor and structure configuration. IEEE transactions on image processing, 23(7):2972–2982, 2014.
  • [Yin et al.(2014)Yin, Yin, Huang, and Hao] Xu-Cheng Yin, Xuwang Yin, Kaizhu Huang, and Hong-Wei Hao. Robust text detection in natural scene images. IEEE transactions on pattern analysis and machine intelligence, 36(5):970–983, 2014.
  • [Yuliang et al.(2017)Yuliang, Lianwen, Shuaitao, and Sheng] Liu Yuliang, Jin Lianwen, Zhang Shuaitao, and Zhang Sheng. Detecting curve text in the wild: New dataset and new solution. arXiv preprint arXiv:1712.02170, 2017.
  • [Zhou et al.(2017)Zhou, Yao, Wen, Wang, Zhou, He, and Liang] Xinyu Zhou, Cong Yao, He Wen, Yuzhi Wang, Shuchang Zhou, Weiran He, and Jiajun Liang. East: an efficient and accurate scene text detector. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 5551–5560, 2017.
  • [Zhu et al.(2017)Zhu, Jiang, Yang, Wang, Li, Fu, Wang, and Luo] Xiangyu Zhu, Yingying Jiang, Shuli Yang, Xiaobing Wang, Wei Li, Pei Fu, Hua Wang, and Zhenbo Luo. Deep residual text detection network for scene text. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), volume 1, pages 807–812. IEEE, 2017.
  • [Zhu et al.(2018)Zhu, Liao, Yang, and Liu] Yingying Zhu, Minghui Liao, Mingkun Yang, and Wenyu Liu. Cascaded segmentation-detection networks for text-based traffic sign detection. IEEE transactions on intelligent transportation systems, 19(1):209–219, 2018.
  • [Zhu and Du(2018)] Yixing Zhu and Jun Du. Textmountain: Accurate scene text detection via instance segmentation. arXiv preprint arXiv:1811.12786, 2018.