
TextTubes for Detecting Curved Text in the Wild

by Joël Seytre, et al.

We present a detector for curved text in natural images. We model scene text instances as tubes around their medial axes and introduce a parametrization-invariant loss function. We train a two-stage curved text detector, and evaluate it on the curved text benchmarks CTW-1500 and Total-Text. Our approach achieves state-of-the-art results or improves upon them, notably for CTW-1500 by over 8 percentage points in F-score.





1 Introduction

Detecting and reading text in natural images (also referred to as scene text or text in the wild) has been a central problem in scene understanding, with applications ranging from helping visually impaired people navigate city scenes to product search and retrieval, and instant translation.

Scene text is typically broken down into two successive tasks: (1) text detection attempts to localize characters, words or lines, and (2) text recognition aims to transcribe their content. Hence, successful text extraction and transcription critically depends on obtaining accurate text localization, which is the main focus of this work.

Further, text in the wild is affected by large nuisance variability: for example, street signs are deformed by perspective transforms and change of viewpoint, and logos tend to be of arbitrary shapes and fonts. All this makes it difficult to extract proper detections to feed to the transcription stage of the text processing pipeline. As Figure 1 shows, simple quadrilateral frames do not properly capture the text, and in the presence of curved text, the bounding boxes of multiple text instances may overlap. This affects proper retrieval and parsing of the text, since the detection will yield multiple, irregular instances in a frame that should be normalized.

Figure 1: Inferences using our model, the curved text detector TextTubes. Real-life objects often contain nested and curved text, which would be incorrectly retrieved by methods with quadrilateral outputs. (a) Original image [11] is inspired by Apollinaire’s Calligrammes [1]. (b) is from CTW-1500’s test set.

In this work, we (1) propose a "tube" parametrization of the text reference frame, which captures most of the nuisance variability through a curved medial axis, parametrized by a polygonal chain, together with a fixed radius for the tube around that axis. We then (2) formulate a parametrization-invariant loss function that allows us to train a region-proposal network to detect scene text instances and regress such tubes, while addressing the ambiguous parametrization of the ground-truth polygons. Finally, we (3) achieve state-of-the-art performance on Total-Text and outperform current solutions on CTW-1500 by over eight points in F-score.

(a) Original
(b) Axis-aligned rectangle
(c) Quadrilateral
(d) Tube
Figure 2: Comparing different text representations. (a) shows the original image; (b) and (c) show that rectangles and quadrilaterals generate overlap and often capture as much background noise as text information, while containing several instances in a given box. In (d) the ground truth polygon is in green, the associated medial axis is in magenta, and the arrows represent the radii of the tube.

2 Related work

Early approaches to scene text detection aim to recover single characters in images using Maximally Stable Extremal Regions (MSER) or connected component methods, and then group them into text instances [47, 45, 13, 46]. False positives are then pruned using AdaBoost classifiers.

More recently, the advent of deep learning has enabled the training of high-accuracy deep networks for text detection, which fall in the following three main categories.

One early approach is to use semantic segmentation to detect text regions and group nearby pixels into full text instances. These methods [5, 50, 49, 42, 8, 6, 48, 43, 24] are generally based on Fully Convolutional Networks (FCN) [27] or U-Net [36]. The current best-performing method [28] on the curved text benchmarks CTW-1500 and Total-Text uses a U-Net to segment the text center line, local radius and orientation of the image’s text regions, then reconstitutes curved text instances through a striding algorithm.

The second approach is to use a single-stage detector [10, 25, 37, 39, 38, 12], such as SSD [23] or YOLO [33, 34], where the network outputs both the object region and the associated class. Liao [19, 20] regress a quadrilateral output from a bounding box proposal, and Bušta [3] achieve text detection and recognition in a single pass.

A third approach is to use two-stage detectors [26, 15, 31, 21, 14, 51, 44] such as Faster R-CNN [35] or more recently Mask R-CNN [9]. For example, Sliding Line Point Regression (SLPR) [52] regresses polygonal vertices off a Faster R-CNN output while TextSpotter [30] uses Mask R-CNN to derive character-level masks to achieve end-to-end text recognition. Our approach features a region-proposal stage and falls in this category.

The main challenge of single-stage and two-stage detectors is to successfully separate instances within each region proposal from the Region Proposal Network (RPN), while the key problem for segmentation-based techniques is to accurately separate text instances that might be segmented together during the bottom-up process.

The application of computer vision solutions to a wider set of compelling natural scenes has encouraged the creation of more accurate text datasets: scene text benchmarks have transitioned from axis-aligned rectangles [29, 17, 40] to quadrilaterals [16, 32] and most recently to curved text with polygonal ground-truth information [26, 4].

These new curved text benchmarks illustrate the necessity of developing new solutions for curved text. For example, Liu [26] report that the popular method EAST [50], despite its strong results on the quadrilateral dataset ICDAR 2015, sees a marked drop in F-score when comparing its performance on the straight and curved text subsets of CTW-1500.

Parametrizing a text instance through its contour vertices is often ambiguous and noisy. On the other hand, an alternative like pixel-level segmentation does not exploit domain knowledge, such as the fact that text is most often distributed along a curve with approximately constant height. To address this, we parametrize text instances as a constant-radius envelope around a text instance’s medial axis.

Our proposed method is implemented by generating region proposals as a first step before regressing the region’s main text instance’s medial axis and associated tube through a parametrization-invariant loss function computed between predicted and ground truth medial axes. This makes it possible for our approach to avoid the noisy and error-prone bottom-up step of grouping pixels into full text instances and ensures correct separation between different instances.

3 Tube Parametrization

Our approach represents text instances as follows: we model a tubular region of space through its curved medial axis and a fixed radius along the curve, and design a parametrization-invariant loss which can be used to train a neural network to regress such tubes.

3.1 Modeling a text instance

The envelope of a curved text instance may be described as a polygon with a variable number of vertices. This parametrization is the natural extension of quadrilaterals (the four-vertex special case) to the more diverse and complex shapes featured in recent datasets [26, 4]. Any given 2-dimensional compact shape can be approximated as a polygon: this increase in precision is an intermediate step between simple axis-aligned rectangles and full text segmentation, where the number of polygon vertices equals the number of pixels of the shape contour.

Thus, we describe text instances as follows: a curved medial axis and its radius (Fig. 2(d)). While tubes are less generic than polygons, they better exploit the tubular shape of text instances. Notably, this parametrization can be non-unique, as the position and number of vertices can change without affecting the text instance. As described in a subsequent section, our formulation accounts for this variability.
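Concretely, converting a ground-truth polygon into a tube can be sketched as follows, assuming a CTW-1500-style annotation order in which the first half of the vertices runs along the top edge and the second half runs back along the bottom edge; `polygon_to_tube` is a hypothetical helper for illustration, not the paper's implementation:

```python
import math

def polygon_to_tube(vertices):
    """Approximate a 2k-vertex text polygon (k points along the top edge,
    then k along the bottom edge in reverse order) by a tube: a polygonal
    medial axis plus one fixed radius."""
    k = len(vertices) // 2
    top = vertices[:k]
    bottom = vertices[k:][::-1]          # re-align bottom with top
    medial_axis = [((tx + bx) / 2.0, (ty + by) / 2.0)
                   for (tx, ty), (bx, by) in zip(top, bottom)]
    # Radius: average half-distance between matched top/bottom points.
    radius = sum(math.dist(t, b) for t, b in zip(top, bottom)) / (2.0 * k)
    return medial_axis, radius

# A straight 6-vertex "text line" of height 2:
poly = [(0, 2), (5, 2), (10, 2), (10, 0), (5, 0), (0, 0)]
axis, r = polygon_to_tube(poly)
print(axis, r)   # [(0.0, 1.0), (5.0, 1.0), (10.0, 1.0)] 1.0
```

A real implementation would compute the true medial axis from maximal disks; the midpoint shortcut above only matches it for roughly symmetric top/bottom contours.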

Medial axis:

The medial axis (sometimes also referred to as the skeleton) of a polygonal text instance is defined as the set of centers of all maximal disks in its polygonal contour [41]. A disk D is said to be maximal in an arbitrary shape S if D is included in S and any strictly larger disk containing D is not included in S.

In any given script, text is usually a concatenation of characters of similar size. It follows that a text instance’s medial axes should be exclusively non-intersecting polygonal chains, i.e. connected series of line segments, and we compute them as such. We also extend the two end segments of the polygonal chains until they join the polygon, as shown in Figure 2. The ground-truth medial axis of a polygonal text instance is thus described by an ordered set of points that form the segments of the medial axis. In the rest of this paper we refer to these vertices along the medial axis as the medial points of the text instance, to differentiate them from generic keypoints (see also Section 5.6).


Radius:

As illustrated in Fig 2(d), we define the radius of a given text instance as the average disk radius over the set of all maximal disks in the polygonal text region. In our model, we make the approximation that a given tube’s radius is constant. As shown in Fig 5, this is a reasonable assumption: roughly 4 out of 5 instances from both CTW-1500 and Total-Text show only limited variation in radius along their medial axis, which results in a high Intersection-over-Union between a fixed-radius tube model and a varying-radius tube.

3.2 Training loss function

Multi-task loss:

Similarly to the original approach of He [9], we train the network using the multi-task loss

L = L_cls + L_box + L_tube,

where L_cls is the cross-entropy loss between predicted and ground truth class (text or no text), determined based on the Intersection-over-Union (IoU) to the ground truth bounding boxes in the image, and L_box is an L1 loss between the coordinates of the predicted bounding box and those of the rectangular, axis-aligned ground truth box encompassing the polygonal text instance.

Figure 3:

TextTubes’s architecture is built on top of the Mask R-CNN model, where we replace the upsampling of the mask head with the direct regression of the medial points and the text radius through 2 fully connected layers of 256 neurons.

Tube loss:

The purpose of our proposed loss function is to align predicted and ground truth tubes’ medial axes. The network outputs points that can be mapped to a curve parametrization, which is at the heart of the model. We avoid naively introducing a regression loss on each individual medial keypoint, since the precise choice of medial points is ambiguous and not unique, and the network may be unfairly penalized for finding equivalent parametrizations. This leads to unstable training and worse overall performance (see Section 5.6). Rather, we design a parametrization invariant loss, which accounts for both proximity of the regressed medial axis to the ground-truth, and proximity of their gradients (similar to a Sobolev norm).
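To see why a parametrization-invariant formulation matters, consider resampling two different vertex placements of the same polygonal chain at uniform arc length: the samples coincide, so a loss computed on such samples does not penalize equivalent parametrizations. A minimal sketch (the `resample` helper is illustrative, not the paper's code):

```python
import math

def resample(polyline, n):
    """Sample n points uniformly by arc length along a polygonal chain."""
    seglens = [math.dist(a, b) for a, b in zip(polyline, polyline[1:])]
    total = sum(seglens)
    pts, seg, covered = [], 0, 0.0
    for i in range(n):
        s = total * i / (n - 1)               # target arc length
        while seg < len(seglens) - 1 and covered + seglens[seg] < s:
            covered += seglens[seg]
            seg += 1
        t = (s - covered) / seglens[seg] if seglens[seg] else 0.0
        (x0, y0), (x1, y1) = polyline[seg], polyline[seg + 1]
        pts.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return pts

# Two parametrizations of the same straight axis:
a = [(0.0, 0.0), (10.0, 0.0)]
b = [(0.0, 0.0), (1.0, 0.0), (7.0, 0.0), (10.0, 0.0)]
sa, sb = resample(a, 5), resample(b, 5)
assert all(math.dist(p, q) < 1e-9 for p, q in zip(sa, sb))
```

A per-vertex regression loss would compare `a` and `b` vertex-to-vertex and report a large error, even though both describe exactly the same curve.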

Let P = (p_1, …, p_m) be the output of the network and G = (g_1, …, g_n) be the ground truth medial axis, where m and n are the numbers of medial points of the output and of the ground truth medial axis respectively. The tube loss combines four terms:

L_tube = L_radius + L_ends + L_elastic + L_line,

where L_radius is a standard L1 loss between each instance’s average ground truth radius and predicted radius.

L_ends is also an L1 loss, applied to the initial and final medial points. While intermediate medial points are not unique due to reparametrization invariance, the endpoints of the tube are not ambiguous, and should be regressed accurately.

The remaining medial points only serve as support for the underlying tube, and their precise location does not affect the parametrization-invariant loss L_line. To ensure that medial points capture the overall instance medial axis and to avoid having medial points collapse together, we define a repulsive elastic loss L_elastic between successive medial points, which penalizes configurations in which the length of the smallest segment of the medial axis (i.e. the distance between its two closest successive medial points) becomes small relative to the total length of the ground truth medial axis.

Line loss:

We formulate L_line as follows:

L_line = λ L_dist + (1 − λ) L_tan,

where γ_P and γ_G are arc-length parametrizations of the predicted and ground truth polygonal chains P and G, L_dist measures the overall proximity between the two medial axes, and L_tan measures the similarity between their directions. Both terms are computed through Gaussian kernels: d(x, C) measures the distance from a point x to the curve C, and θ_P and θ_G are the angles of the prediction and ground truth tangents, respectively, at a point x and at its closest point x* on the other curve, such that d(x, x*) = d(x, C). In practice, we approximate this loss function numerically by sampling points uniformly across γ_P and γ_G.

The loss function introduces several hyper-parameters. We sample 100 points uniformly along the medial axis to compute distances and tangent differences. The parameter λ interpolates between giving full weight to the distance between curves (λ = 1) and full weight to the distance between their derivatives (λ = 0); we cross-validate λ for the best performance. σ_dist and σ_tan are the standard deviations of the Gaussian kernels, chosen based on the scale of the kernels’ inputs d and θ. In addition to these hyper-parameters, the loss terms in Eq. 1 and 2 can be assigned individual weights: we cross-validate them to be equal. In order to have a fixed-size output from our model, we fix the number of medial points to a hyper-parameter. We found that TextTubes’s performance is independent of the number of medial points regressed when enough of them are used, since with 3 or fewer medial points the medial axis consists of too few segments to follow a curve. More specifically, the difference in performance between 4, 5, 7, 10 and 15 medial points is less than 0.5 percentage points in F-score; we keep this number fixed in the rest of this study.
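The numerical approximation described above can be sketched as follows. The exact Gaussian-kernel form, the one-directional matching (predicted curve against ground truth only), and the default values of λ, σ_dist and σ_tan are all assumptions for illustration, not the paper's exact settings:

```python
import math

def sample_with_tangents(chain, n):
    """Uniform arc-length samples of a polygonal chain plus tangent angles."""
    seglens = [math.dist(a, b) for a, b in zip(chain, chain[1:])]
    total = sum(seglens)
    out, seg, covered = [], 0, 0.0
    for i in range(n):
        s = total * i / (n - 1)
        while seg < len(seglens) - 1 and covered + seglens[seg] < s:
            covered += seglens[seg]
            seg += 1
        (x0, y0), (x1, y1) = chain[seg], chain[seg + 1]
        t = (s - covered) / seglens[seg] if seglens[seg] else 0.0
        out.append(((x0 + t * (x1 - x0), y0 + t * (y1 - y0)),
                    math.atan2(y1 - y0, x1 - x0)))
    return out

def line_loss(pred, gt, lam=0.5, sigma_d=1.0, sigma_t=0.5, n=100):
    """Parametrization-invariant proximity between two medial axes.
    Hyper-parameter values here are illustrative assumptions."""
    sp, sg = sample_with_tangents(pred, n), sample_with_tangents(gt, n)
    l_dist = l_tan = 0.0
    for (p, th_p) in sp:
        # closest ground-truth sample and its tangent angle
        (q, th_g) = min(sg, key=lambda e: math.dist(p, e[0]))
        d = math.dist(p, q)
        l_dist += 1.0 - math.exp(-d ** 2 / (2 * sigma_d ** 2))
        l_tan += 1.0 - math.exp(-(th_p - th_g) ** 2 / (2 * sigma_t ** 2))
    return (lam * l_dist + (1 - lam) * l_tan) / n

axis = [(0.0, 0.0), (10.0, 0.0)]
print(line_loss(axis, axis))                            # identical axes -> 0.0
print(line_loss([(0.0, 2.0), (10.0, 2.0)], axis) > 0)   # offset axis -> True
```

Because both terms depend only on uniformly resampled points, inserting or moving intermediate vertices along either chain leaves the loss essentially unchanged.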

(a) CTW-1500
(b) Total-Text
Figure 4: Comparison of ground truth bounding polygons: these images show the difference between CTW-1500’s line-level information and Total-Text’s word-level bounding polygons.


(a) Details of the image and instance count.
(b) CTW-1500: curvature
(c) CTW-1500: radius variation
(d) Total-Text: curvature
(e) Total-Text: radius variation
Figure 5: Dataset information. (a) breaks down the number of images and instances. (b) and (d) are histograms of the maximal angle difference between medial axis segments, whereas (c) and (e) describe the relative radius variation along a text instance. Notable difference: CTW-1500 has instances that curve more than Total-Text’s. This is due to the difference in text instance length (see Section 4.1 and Fig 4).

3.3 Computing a tube using a deep network

As described in Figure 3, we build on top of Mask R-CNN [9]: the input image goes through a Region-Proposal Network (RPN), from which Regions of Interest (RoI) features are computed using a Feature Pyramid Network (FPN) [22]. Those RoI features go through the Faster R-CNN detector [35] to determine the class and bounding box of the instance. In parallel, the RoI features are also fed to our newly introduced tube branch, which consists of a Fully Convolutional Network (FCN) [27] on top of which the coordinates of the medial points as well as the radius of the text instance are regressed. Implementation details are given in Section 5.1. All the points along the medial axes are linear interpolations of the vertices output by the network, and can therefore be sampled while retaining differentiability w.r.t. the network output.

4 Datasets and evaluation

4.1 Curved text datasets


CTW-1500 is a dataset composed of 1,500 images that were collected from natural scenes, websites and image libraries like OpenImages [18]. It contains over 10,000 text instances, with at least one curved instance per image. The ground truth polygons are annotated in the form of 14-vertex polygons with line-level information.


Total-Text is a dataset composed of 1,255 train images and 300 test images that can contain curved text instances. The ground truth information is in the form of word-level polygons with a varying number of vertices, from 4 to 18.

Curved and straight subsets

To highlight the importance of our curved text contribution, we distinguish whether individual text instances are curved or straight and form a curved and a straight subset of each dataset. Total-Text distinguishes curved instances in its annotations without specifying how, whereas CTW-1500 does not provide this information. Thus we apply our own criterion to both datasets.

We determine whether or not an instance is curved based on its medial axis: it is curved if the directions of any two distinct segments differ by more than a fixed threshold angle. This yields roughly half curved instances for CTW-1500 and one third for Total-Text, as seen in Fig 5(a). For more details we refer the reader to the original dataset papers [26, 4].
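Under this criterion, the curved/straight split can be sketched as below; the default threshold value is a hypothetical stand-in, since the exact figure is not reproduced here:

```python
import math

def is_curved(medial_axis, threshold_rad=0.2):
    """Classify a text instance as curved if the directions of any two
    distinct medial-axis segments differ by more than threshold_rad.
    The 0.2 rad default is an assumed stand-in for the actual threshold."""
    angles = [math.atan2(y1 - y0, x1 - x0)
              for (x0, y0), (x1, y1) in zip(medial_axis, medial_axis[1:])]
    # wrapped absolute angle difference between two directions
    diff = lambda a, b: abs(math.atan2(math.sin(a - b), math.cos(a - b)))
    return any(diff(a, b) > threshold_rad
               for i, a in enumerate(angles) for b in angles[i + 1:])

print(is_curved([(0, 0), (5, 0), (10, 0)]))   # collinear axis -> False
print(is_curved([(0, 0), (5, 0), (8, 4)]))    # axis bends up  -> True
```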

4.2 Evaluation protocol

We base our evaluations on the polygonal PASCAL VOC protocol [7] at a fixed Intersection-over-Union (IoU) threshold, as made publicly available by Liu [26].

After ranking polygonal predictions based on their confidence score, predictions are true positives if their IoU with a ground-truth polygon is greater than the threshold. Any ground-truth instance can only be matched once. The Precision-Recall (PR) curves associated with TextTubes are shown in Figure 7. From such PR curves we extract the precision and recall corresponding to the maximum F-score.
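A minimal sketch of this matching protocol, substituting axis-aligned box IoU for the polygonal IoU of the actual evaluation code:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes (x0, y0, x1, y1); a stand-in for the
    polygonal IoU used by the real protocol."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def best_f_score(preds, gts, iou_thr=0.5):
    """PASCAL VOC-style matching: rank predictions by confidence, greedily
    match each to an unused ground truth, then sweep the ranked list and
    return the (precision, recall, F) operating point with maximum F."""
    preds = sorted(preds, key=lambda p: -p[1])          # (box, score) pairs
    used, tp, best = set(), 0, (0.0, 0.0, 0.0)
    for rank, (box, _) in enumerate(preds, start=1):
        match = max((g for g in range(len(gts)) if g not in used),
                    key=lambda g: box_iou(box, gts[g]), default=None)
        if match is not None and box_iou(box, gts[match]) >= iou_thr:
            used.add(match)
            tp += 1
        p, r = tp / rank, tp / len(gts)
        f = 2 * p * r / (p + r) if p + r else 0.0
        best = max(best, (f, p, r))
    f, p, r = best
    return p, r, f

gts = [(0, 0, 10, 10), (20, 0, 30, 10)]
preds = [((0, 0, 10, 10), 0.9), ((50, 50, 60, 60), 0.4)]
print(best_f_score(preds, gts))   # one hit, one false positive
```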

Additional Text Images CTW-1500 Total-Text
Method Used For Training P (%) R (%) F (%) P (%) R (%) F (%)
Tian (2016) [39] - - -
Shi (2017) [37]
Zhou (2017) [50]
Liu (2017) [26] 77.4 69.8 73.4 - - -
Ch’ng (2017) [4] K (COCO) - - - 33 40 36
Zhu (2018) [52] 80.1 70.1 74.8 - - -
Lyu (2018) [30] K (synth. + ICDAR13/15) - - - 69.0 55.0 61.3
Long (2018) [28] K (synth.) 67.9 85.3 75.6 82.7 74.5 78.4
Keypoints baseline (Sec. 5.6) 78.9 73.7 76.2 - - -
TextTubes (no pre-training) 83.54 79.00 81.21 77.56 73.00 75.21
TextTubes K 87.65 80.00 83.65 84.25 73.50 78.51
Table 1: Comparing Precision, Recall and F-score on CTW-1500 and Total-Text, evaluated at a fixed Intersection-over-Union (IoU) threshold.
Results for quadrilateral methods on CTW-1500 are taken from Liu [26] and trained on CTW-1500’s circumscribed rectangles.
Results for these methods are taken from Long [28] and are not fine-tuned on Total-Text.
This keypoints baseline is not applicable to Total-Text because of its varying number of vertices per ground truth polygon (see Sec. 5.6).
We initialize the network with weights learned from training on CTW-1500 for Total-Text and vice versa.

4.3 Dataset difference: instance-level information

Although Total-Text and CTW-1500 are both relevant to the study of curved text, the two datasets offer different levels of granularity in their ground truth information. As displayed in Figure 4, CTW-1500 groups words together if they are aligned, whereas Total-Text uses a different polygon for each individual word.

5 Experiments

In this section we evaluate and compare our tube parametrization and our trained text detector to the established benchmarks of the scene text detection literature.

5.1 Training

We initialize our network with a ResNet-50 backbone pre-trained on ImageNet. We train with minibatches of 2 images, from which we extract 512 RoIs each. We use Stochastic Gradient Descent (SGD): during a warm-up period, we ramp up the learning rate from one third of its full value; the learning rate is then divided by 10 twice over the course of training. We use standard values for weight decay and momentum. Our region proposals’ aspect ratios are sampled from a fixed set, and we use levels 2 to 6 of the FPN pyramid.

During training we randomly resize the image to values in the [640, 800] pixel range. For inference we resize the image so that its shortest side is 800 when using mono-scale. For multi-scale inference we resize images from CTW-1500 and Total-Text to several fixed scales. To improve performance on rotated and vertical text, we also augment the data by applying a random 90-degree rotation with fixed probability.

Figure 6: Example of tube inferences: we accurately capture both the straight and curved text regions with various orientations.

5.2 Post-Processing

Non-Maximal Suppression (NMS)

As is common in object detection frameworks, we use soft-NMS [2] at a fixed IoU threshold on the rectangular outputs of the Faster R-CNN module, in order to suppress similar region proposals and reduce the number of overlapping predictions that result in false positives. Soft-NMS performs better than hard NMS because these boxes are the basis for the regression of separate tubes that might share overlapping rectangular bounding boxes, as with real-life nested text instances such as those of Fig. 1.
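The Gaussian soft-NMS variant of Bodla et al. [2] can be sketched as follows on axis-aligned boxes; the σ and threshold defaults are common choices, not necessarily ours:

```python
import math

def soft_nms(boxes, scores, sigma=0.5, score_thr=0.001):
    """Gaussian soft-NMS: instead of discarding a box that overlaps a
    higher-scoring one, decay its score by exp(-iou^2 / sigma), so nested
    but distinct detections can survive with reduced confidence."""
    def iou(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / float(area(a) + area(b) - inter)

    pool, keep = list(zip(boxes, scores)), []
    while pool:
        pool = sorted(pool, key=lambda e: -e[1])
        box, score = pool.pop(0)
        if score < score_thr:
            break
        keep.append((box, score))
        # decay the scores of all remaining overlapping boxes
        pool = [(b, s * math.exp(-iou(box, b) ** 2 / sigma)) for b, s in pool]
    return keep

boxes = [(0, 0, 10, 10), (1, 0, 11, 10), (30, 30, 40, 40)]
kept = soft_nms(boxes, [0.9, 0.8, 0.7])
print([round(s, 3) for _, s in kept])   # overlapping box decayed, not dropped
```

Hard NMS would delete the second box outright; soft-NMS keeps it at a lower score, which matters here because overlapping region proposals may still regress to different tubes.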

We want to avoid tubes that overlap on nearby text instances: on top of bounding box soft-NMS, we use traditional hard NMS between output tubes by computing their polygonal intersections.

5.3 Results

We achieve state-of-the-art results on CTW-1500 and Total-Text, as shown in Table 1. While we achieve state-of-the-art performance on Total-Text, our results are significantly stronger on CTW-1500, where the closest method achieves 75.6 in F-score whereas TextTubes’s best results reach 83.65. We discuss our understanding of this gap in Section 6. We also measured average inference time on a V100 GPU with and without multi-scale.

(a) CTW-1500
(b) Total-Text
Figure 7: Precision-Recall Curves, comparing the best results with the removal of polygonal post-processing (PP) and multi-scale (MS) and Section 5.6’s baseline keypoints approach.

5.4 Ablation study

As shown in Table 2 and Figure 7, we measure the difference in performance of TextTubes when removing the tube-specific polygonal post-processing and multi-scale inference, as well as TextTubes’s results on the curved and straight subsets (as defined in Sec. 4.1) of both datasets.

The Precision-Recall (PR) curves suggest that the impact of multi-scale is not very significant. On the other hand, post-processing has a large impact, specifically on CTW-1500. The impact of post-processing on CTW-1500 performance may be attributed to (1) the fact that CTW-1500’s line-level instances result in many more overlapping tube outputs than Total-Text’s word level predictions, and that accordingly (2) line-level polygonal tubes differ more from the Faster R-CNN output boxes than word-level tubes, which in turn causes polygonal NMS to impact CTW-1500 more than Total-Text.

Furthermore, the post-processing helps the network retain similar region proposals that result in non-overlapping tubes, as in the case of nested instances (Fig 1), while discarding similar region proposals that are processed into tubes for the same text instance.

5.5 Measuring performance on straight text

It is of paramount importance to assess whether our model performs well on the specific task of curved text detection while also detecting straight text accurately. This is why we evaluate our method on the curved and straight subsets of CTW-1500 and Total-Text in Table 2. The evaluation on the curved and straight subsets can include neither precision nor F-score, as the other subset is discounted from the ground truth. Our ablation study indicates that our model captures 90.50% of CTW-1500’s curved instances within the IoU margin, while reaching up to 95.60% of straight instances. The fact that TextTubes recovers seven points more of Total-Text’s curved instances than straight ones is due to the dataset containing almost twice as many straight instances, including many very short words composed of few characters that result in false negatives.

Achieving similar results on the curved and straight subsets illustrates the robustness of our method.

5.6 Baseline: learning polygon vertices independently

A relevant baseline is to naively use the Mask R-CNN architecture to estimate the vertices of the ground-truth polygon as individual keypoints, rather than outputting medial points that follow our tube parametrization through TextTubes. This baseline has only limited applicability, since it assumes a fixed number of points on the ground truth, which, while true for CTW-1500, does not hold for other datasets (Total-Text). In Table 1, we show that TextTubes outperforms this approach by a wide margin in F-score. This may be due to TextTubes’s loss function being invariant to reparametrization, while the naïve baseline loss, being defined independently on each point, does not capture global properties of the text instance. Fig. 8 shows for several images that the naïve approach often predicts one of the 14 vertices wrongly, which in turn causes the entire prediction to be missed due to intersecting sides. The increase in performance from this naïve baseline to TextTubes showcases the advantage of our tube loss.

Ablation study | CTW-1500 | Total-Text
 | Captured (%) P (%) R (%) F (%) | Captured (%) P (%) R (%) F (%)
straight subset | 95.60 - - - | 86.24 - - -
curved subset | 90.50 - - - | 93.71 - - -
no post-process. | 93.20 72.5 69.1 70.8 | 88.90 81.13 74.30 77.57
no multi-scale | 92.30 84.8 79.0 81.8 | 85.00 81.36 74.20 77.62
Best results | 93.00 87.65 80.00 83.65 | 88.90 84.25 73.50 78.51
Table 2: Ablation study of TextTubes on CTW-1500 and Total-Text. No post-processing indicates the removal of polygonal NMS. We do not report precision (or F-score) for the straight and curved subsets because of false positives from the excluded subsets. The first percentage column of each dataset describes what fraction of that dataset’s test instances we manage to capture.
Figure 8: Inferences. The first row contains the original images, the second row shows the naïve baseline results (see Sec. 5.6) and the third row shows TextTubes’s outputs. Treating each vertex as an individual keypoint often causes the detector to miss because one keypoint lands on the wrong part of an otherwise accurate region proposal; in the rightmost picture the top-left keypoint is wrongly located on the "B". We color each keypoint differently in the second row to make this more visible.

6 Discussion

Modeling an instance’s medial axis and average radius instead of directly inferring the associated polygon achieves two main goals: (1) it is a robust way to compute a loss function on the text instance’s shape (it wouldn’t be obvious how to compute a polygon-based loss without computing the loss on a vertex-to-vertex basis); (2) it captures information about the instance overall and does not overfit to a given dataset’s choice of representation, which would cause the features to be locally restricted to specific keypoints-to-keypoints mapping along the text instances.

Further, we would like to highlight how our model performs on text instances that are individual words vs. lines of words. On datasets that consist of individual words, such as Total-Text, our model is able to achieve state-of-the-art performance. On datasets that have line-level annotations, such as CTW-1500, our model is able to better capture textual information along an instance’s separate words. This improvement can be seen in our results on CTW-1500, where we have a significant improvement over state-of-the-art.

We foresee further quality and precision improvements to our method from improving the underlying region proposal network (based on Faster R-CNN). Though axis-aligned boxes are typically sufficient for scene text, even better performance could possibly be achieved with our tube loss by including rotated bounding box proposals (e.g. through rotated anchor/default boxes), as shown in the work of Ma [31].

7 Conclusion

We propose a tube representation for scene text instances of various shapes and train TextTubes, a curved text detector, to learn it. This two-stage network regresses the medial axis and the radius of the text instances for each Region of Interest of the region-proposal module, from which an accurate tube is computed. Our method improves on state-of-the-art results on the established curved text benchmarks CTW-1500 and Total-Text.

While this tube representation is particularly relevant for text instances, it could be adapted to other tasks where the medial axis is complex yet relevant, such as pose estimation.


  • [1] G. Apollinaire. Calligrammes. Flammarion, 2018.
  • [2] N. Bodla, B. Singh, R. Chellappa, and L. S. Davis. Soft-NMS: improving object detection with one line of code. Proceedings of the International Conference on Computer Vision (ICCV), 2017.
  • [3] M. Busta, L. Neumann, and J. Matas. Deep textspotter: An end-to-end trainable scene text localization and recognition framework. Proceedings of the International Conference on Computer Vision (ICCV), 2017.
  • [4] C. K. Chng and C. S. Chan. Total-text: A comprehensive dataset for scene text detection and recognition. Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), 2017.
  • [5] Y. Dai, Z. Huang, Y. Gao, and K. Chen. Fused text segmentation networks for multi-oriented scene text detection. arXiv preprint arXiv:1709.03272, 2017.
  • [6] D. Deng, H. Liu, X. Li, and D. Cai. Pixellink: Detecting scene text via instance segmentation. Proceedings of AAAI Conference on Artificial Intelligence (AAAI), 2018.
  • [7] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International Journal of Computer Vision (IJCV), 88(2):303–338, 2010.
  • [8] D. He, X. Yang, C. Liang, Z. Zhou, A. G. Ororbia, D. Kifer, and C. L. Giles. Multi-scale fcn with cascaded instance aware segmentation for arbitrary oriented word spotting in the wild. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 474–483, 2017.
  • [9] K. He, G. Gkioxari, P. Dollár, and R. B. Girshick. Mask R-CNN. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [10] P. He, W. Huang, T. He, Q. Zhu, Y. Qiao, and X. Li. Single shot text detector with regional attention. Proceedings of the International Conference on Computer Vision (ICCV), pages 3066–3074, 2017.
  • [11] G. Hess. Calligramme, 2015.
  • [12] H. Hu, C. Zhang, Y. Luo, Y. Wang, J. Han, and E. Ding. Wordsup: Exploiting word annotations for character based text detection. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [13] W. Huang, Z. Lin, J. Yang, and J. Wang. Text localization in natural images using stroke feature transform and text covariance descriptors. Proceedings of the International Conference on Computer Vision (ICCV), pages 1241–1248, 2013.
  • [14] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Reading text in the wild with convolutional neural networks. International Journal of Computer Vision (IJCV), 116(1):1–20, 2016.
  • [15] Y. Jiang, X. Zhu, X. Wang, S. Yang, W. Li, H. Wang, P. Fu, and Z. Luo. R2CNN: rotational region cnn for orientation robust scene text detection. arXiv preprint arXiv:1706.09579, 2017.
  • [16] D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. R. Chandrasekhar, S. Lu, et al. ICDAR 2015 competition on robust reading. Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), pages 1156–1160, 2015.
  • [17] D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. G. i. Bigorda, S. R. Mestre, J. Mas, D. F. Mota, J. A. Almazàn, and L. P. de las Heras. ICDAR 2013 competition on robust reading. Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), 2013.
  • [18] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, T. Duerig, and V. Ferrari. The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale. arXiv:1811.00982, 2018.
  • [19] M. Liao, B. Shi, and X. Bai. TextBoxes++: A single-shot oriented scene text detector. IEEE Transactions on Image Processing, 27(8):3676–3690, 2018.
  • [20] M. Liao, B. Shi, X. Bai, X. Wang, and W. Liu. TextBoxes: A fast text detector with a single deep neural network. Proceedings of AAAI Conference on Artificial Intelligence (AAAI), pages 4161–4167, 2017.
  • [21] M. Liao, Z. Zhu, B. Shi, G.-s. Xia, and X. Bai. Rotation-sensitive regression for oriented scene text detection. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 5909–5918, 2018.
  • [22] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 936–944, 2017.
  • [23] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. Proceedings of the European Conference on Computer Vision (ECCV), pages 21–37, 2016.
  • [24] X. Liu, D. Liang, S. Yan, D. Chen, Y. Qiao, and J. Yan. FOTS: Fast oriented text spotting with a unified network. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [25] Y. Liu and L. Jin. Deep matching prior network: Toward tighter multi-oriented text detection. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • [26] Y. Liu, L. Jin, S. Zhang, and S. Zhang. Detecting curve text in the wild: New dataset and new solution. CoRR, abs/1712.02170, 2017.
  • [27] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [28] S. Long, J. Ruan, W. Zhang, X. He, W. Wu, and C. Yao. TextSnake: A Flexible Representation for Detecting Text of Arbitrary Shapes. Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [29] S. M. Lucas, A. Panaretos, L. Sosa, A. Tang, S. Wong, and R. Young. ICDAR 2003 competition on robust reading. Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), 2003.
  • [30] P. Lyu, M. Liao, C. Yao, W. Wu, and X. Bai. Mask TextSpotter: An End-to-End Trainable Neural Network for Spotting Text with Arbitrary Shapes. Proceedings of the European Conference on Computer Vision (ECCV), 2018.
  • [31] J. Ma, W. Shao, H. Ye, L. Wang, H. Wang, Y. Zheng, and X. Xue. Arbitrary-oriented scene text detection via rotation proposals. IEEE Transactions on Multimedia, 2018.
  • [32] N. Nayef, F. Yin, I. Bizid, H. Choi, Y. Feng, D. Karatzas, Z. Luo, U. Pal, C. Rigaud, J. Chazalon, W. Khlif, M. M. Luqman, J. Burie, C. Liu, and J. Ogier. ICDAR 2017 robust reading challenge on multi-lingual scene text detection and script identification - RRC-MLT. Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), Nov 2017.
  • [33] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: Unified, real-time object detection. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 779–788, 2016.
  • [34] J. Redmon and A. Farhadi. YOLO9000: Better, faster, stronger. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 6517–6525, 2017.
  • [35] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems (NeurIPS), pages 91–99, 2015.
  • [36] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical Image Computing and Computer-assisted Intervention, pages 234–241, 2015.
  • [37] B. Shi, X. Bai, and S. Belongie. Detecting oriented text in natural images by linking segments. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 3482–3490, 2017.
  • [38] S. Tian, S. Lu, and C. Li. WeText: Scene text detection under weak supervision. Proceedings of the International Conference on Computer Vision (ICCV), 2017.
  • [39] Z. Tian, W. Huang, T. He, P. He, and Y. Qiao. Detecting text in natural image with connectionist text proposal network. Proceedings of the European Conference on Computer Vision (ECCV), 2016.
  • [40] A. Veit, T. Matera, L. Neumann, J. Matas, and S. Belongie. Coco-text: Dataset and benchmark for text detection and recognition in natural images. arXiv preprint arXiv:1601.07140, 2016.
  • [41] W. Wathen-Dunn. Models for the perception of speech and visual form: proceedings of a symposium. pages 364–368, 1967.
  • [42] Y. Wu and P. Natarajan. Self-organized text detection with minimal post-processing via border learning. Proceedings of the International Conference on Computer Vision (ICCV), pages 5010–5019, 2017.
  • [43] C. Xue, S. Lu, and F. Zhan. Accurate scene text detection through border semantics awareness and bootstrapping. Proceedings of the European Conference on Computer Vision (ECCV), pages 370–387, 2018.
  • [44] Q. Yang, M. Cheng, W. Zhou, Y. Chen, M. Qiu, and W. Lin. IncepText: A new inception-text module with deformable PSROI pooling for multi-oriented scene text detection. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2018.
  • [45] C. Yao, X. Bai, W. Liu, Y. Ma, and Z. Tu. Detecting texts of arbitrary orientations in natural images. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 1083–1090, 2012.
  • [46] Q. Ye and D. Doermann. Text detection and recognition in imagery: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 37(7):1480–1500, 2015.
  • [47] X.-C. Yin, X. Yin, K. Huang, and H.-W. Hao. Robust text detection in natural scene images. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(5):970–983, 2014.
  • [48] X. Yue, Z. Kuang, Z. Zhang, Z. Chen, P. He, Y. Qiao, and W. Zhang. Boosting up scene text detectors with guided cnn. Proceedings of British Machine Vision Conference (BMVC), 2018.
  • [49] Z. Zhang, C. Zhang, W. Shen, C. Yao, W. Liu, and X. Bai. Multi-oriented text detection with fully convolutional networks. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 4159–4167, 2016.
  • [50] X. Zhou, C. Yao, H. Wen, Y. Wang, S. Zhou, W. He, and J. Liang. EAST: an efficient and accurate scene text detector. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pages 2642–2651, 2017.
  • [51] X. Zhu, Y. Jiang, S. Yang, X. Wang, W. Li, P. Fu, H. Wang, and Z. Luo. Deep residual text detection network for scene text. Proceedings of the International Conference on Document Analysis and Recognition (ICDAR), 1:807–812, 2017.
  • [52] Y. Zhu and J. Du. Sliding line point regression for shape robust scene text detection. CoRR, abs/1801.09969, 2018.