DeepText: A Unified Framework for Text Proposal Generation and Text Detection in Natural Images

05/24/2016, by Zhuoyao Zhong et al.

In this paper, we develop a novel unified framework called DeepText for text region proposal generation and text detection in natural images via a fully convolutional neural network (CNN). First, we propose the inception region proposal network (Inception-RPN) and design a set of text characteristic prior bounding boxes to achieve high word recall with only hundreds of candidate proposals. Next, we present a powerful text detection network that embeds ambiguous text category (ATC) information and multi-level region-of-interest pooling (MLRP) for text and non-text classification and accurate localization. Finally, we apply an iterative bounding box voting scheme to pursue high recall in a complementary manner and introduce a filtering algorithm to retain the most suitable bounding box for each text instance while removing redundant inner and outer boxes. Our approach achieves F-measures of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, outperforming previous state-of-the-art results.


1 Introduction

Text detection is a procedure that determines whether text is present in natural images and, if it is, where each text instance is located. Text in images provides rich and precise high-level semantic information, which is important for numerous potential applications such as scene understanding, image and video retrieval, and content-based recommendation systems. Consequently, text detection in natural scenes has attracted considerable attention in the computer vision and image understanding community

[Wang et al.(2012)Wang, Wu, Coates, and Ng, Jaderberg et al.(2014)Jaderberg, Vedaldi, and Zisserman, Neumann and Matas(2010), Yin et al.(2014)Yin, Yin, Huang, and Hao, Huang et al.(2014)Huang, Qiao, and Tang, Sun et al.(2015)Sun, Huo, and jia, Epshtein et al.(2010)Epshtein, Ofek, and Y.Wexler, Karatzas et al.(2011)Karatzas, Mestre, Mas, Nourbakhsh, and Roy, Karatzas et al.(2013)Karatzas, Shafait, Uchida, Iwamura, i Bigorda, Mestre, Mas, Mota, Almazan, , and de las Heras, Jaderberg et al.(2016)Jaderberg, Simonyan, Vedaldi, and Zisserman, Tian et al.(2015)Tian, Pan, Huang, Lu, Yu, , and Tan, Zhang et al.(2016)Zhang, Lin, Chen, Jin, and Lin]. However, text detection in the wild remains a challenging and unsolved problem because of the following factors. First, the background of a natural image can be very complex, and some region components, such as signs, bricks, and grass, are difficult to distinguish from text. Second, scene text is diverse and usually exists in various colors, fonts, orientations, languages, and scales. Furthermore, there are highly confounding factors, such as non-uniform illumination, strong exposure, low contrast, blurring, low resolution, and occlusion, which pose hard challenges for the text detection task.

Figure 1: Pipeline architecture of DeepText. Our approach takes a natural image as input, generates hundreds of word region proposals via Inception-RPN (Stage 1), and then scores and refines each word proposal using the text detection network (Stage 2).

In the last few decades, sliding window-based and connected component-based methods have become the mainstream approaches to the text detection problem. Sliding window-based methods [Wang et al.(2012)Wang, Wu, Coates, and Ng, Jaderberg et al.(2014)Jaderberg, Vedaldi, and Zisserman] use sliding windows of different ratios and scales to search for possible text positions in image pyramids, incurring a high computational cost. Connected component-based methods, represented by maximally stable extremal regions (MSERs) [Neumann and Matas(2010), Yin et al.(2014)Yin, Yin, Huang, and Hao, Huang et al.(2014)Huang, Qiao, and Tang, Sun et al.(2015)Sun, Huo, and jia] and the stroke width transform (SWT) [Epshtein et al.(2010)Epshtein, Ofek, and Y.Wexler], extract character candidates and group them into words or text lines. In particular, previous approaches applying MSERs as the basic representation achieved promising performance in the ICDAR 2011 and 2013 robust text detection competitions [Karatzas et al.(2011)Karatzas, Mestre, Mas, Nourbakhsh, and Roy, Karatzas et al.(2013)Karatzas, Shafait, Uchida, Iwamura, i Bigorda, Mestre, Mas, Mota, Almazan, , and de las Heras]. However, MSER-based methods focus on low-level pixel operations and mainly access local character component information, which leads to poor performance in some challenging situations, such as multiple connected characters, segmented stroke characters, and non-uniform illumination, as mentioned in [Zhang et al.(2016)Zhang, Lin, Chen, Jin, and Lin]. Further, this bottom-up approach gives rise to sequential error accumulation in the overall text detection pipeline, as stated in [Tian et al.(2015)Tian, Pan, Huang, Lu, Yu, , and Tan].

Rather than extracting character candidates, Jaderberg et al. [Jaderberg et al.(2016)Jaderberg, Simonyan, Vedaldi, and Zisserman] applied complementary region proposal methods, edge boxes (EB) [Zitnick and Dollár(2014)] and the aggregate channel feature (ACF) detector [Dollár et al.(2014)Dollár, Appel, Belongie, and Perona], to perform word detection and acquired a high word recall with tens of thousands of word region proposals. They then employed HOG features and a random forest classifier to remove non-text region proposals and thereby improve precision. Bounding box regression was also used for more accurate localization. Finally, using a large pre-trained convolutional neural network (CNN) to recognize the detected word-cropped images, they achieved superior text spotting and text-based image retrieval performance on several standard benchmarks.

The region proposal generation step in the generic object detection pipeline has also attracted much interest. In recent studies, object detection models based on region proposal algorithms that hypothesize class-specific or class-agnostic object locations have achieved state-of-the-art detection performance [Girshick et al.(2014)Girshick, Donahue, Darrell, and Malik, Girshick(2015), He et al.(2014)He, Zhang, Ren, and Sun, Gidaris and Komodakis(2015)]. However, standard region proposal algorithms, such as selective search (SS) [de Sande et al.(2011)de Sande, Uijlings, Gevers, and Smeulders], MCG [Arbelaez et al.(2014)Arbelaez, Pont-Tuset, Barron, Marques, and Malik], and EB [Zitnick and Dollár(2014)], generate an extremely large number of region proposals. This leads to high recall, but burdens the follow-up classification and regression models and is also relatively time-consuming. To address these issues, Ren et al. [Ren et al.(2015)Ren, He, Girshick, and Sun] proposed the region proposal network (RPN), which computes region proposals with a deep fully convolutional network. It generates far fewer region proposals, yet achieves a promising recall rate under different overlap thresholds. Moreover, RPN and Fast R-CNN can be combined into a joint network trained to share convolutional features. Owing to this innovation, the approach achieved better object detection accuracy in less time than Fast R-CNN with SS [Girshick(2015)] on PASCAL VOC 2007 and 2012.

In this paper, inspired by [Ren et al.(2015)Ren, He, Girshick, and Sun], we design a unified framework for text characteristic region proposal generation and text detection in natural images. To avoid the sequential error accumulation of bottom-up character candidate extraction strategies, we focus on word proposal generation. In contrast to previous region proposal methods that generate thousands of word region proposals, we aim to reduce this number to hundreds while maintaining a high word recall. To accomplish this, we propose the novel inception RPN (Inception-RPN) and design a set of text characteristic prior bounding boxes to hunt for high-quality word region proposals. Subsequently, we present a powerful text detection network that incorporates additional ambiguous text category (ATC) information and multi-level region of interest (ROI) pooling into the optimization process. Finally, by means of some heuristic processing, including an iterative bounding box voting scheme and a filtering algorithm that removes redundant boxes for each text instance, we obtain our high-performance text detection system, called DeepText. An overview of DeepText is shown in Fig. 1. Our contributions can be summarized as follows.

(1) We propose Inception-RPN, which applies multi-scale sliding windows over the top of convolutional feature maps and associates a set of text characteristic prior bounding boxes with each sliding position to generate word region proposals. The multi-scale sliding-window feature retains local as well as contextual information at the corresponding position, which helps to filter out non-text prior bounding boxes. Our Inception-RPN achieves a high recall with only hundreds of word region proposals.

(2) We introduce the additional ATC information and multi-level ROI pooling (MLRP) into the text detection network, which helps it to learn more discriminative information for distinguishing text from complex backgrounds.

(3) To make better use of intermediate models from the overall training process, we develop an iterative bounding box voting scheme, which obtains high word recall in a complementary manner. In addition, based on empirical observation, multiple inner or outer boxes may simultaneously exist for one text instance. To tackle this problem, we use a filtering algorithm to keep the most suitable bounding box and remove the remainder.

(4) Our approach achieves an F-measure of 0.83 and 0.85 on the ICDAR 2011 and 2013 robust text detection benchmarks, respectively, outperforming the previous state-of-the-art results.

The remainder of this paper is set out as follows. The proposed methodology is described in detail in Section 2. Section 3 presents our experimental results and analysis. Finally, the conclusion is given in Section 4.

2 Methodology

2.1 Text region proposal generation

Our Inception-RPN method follows the notion of the RPN proposed in [Ren et al.(2015)Ren, He, Girshick, and Sun]: it takes a natural scene image and a set of ground-truth bounding boxes that mark text regions as input, and generates a manageable number of candidate word region proposals. To search for word region proposals, we slide an inception network over the top convolutional feature maps (Conv5_3) of the VGG16 model [Simonyan and Zisserman(2015)] and associate a set of text characteristic prior bounding boxes with each sliding position. The details are as follows.

Text characteristic prior bounding box design. Our prior bounding boxes are similar to the anchor boxes defined in RPN. Taking text characteristics into consideration, for most word or text line instances, width is usually greater than height; in other words, their aspect ratios are usually less than one. Furthermore, most text regions are small in natural images. Therefore, we empirically design four scales (32, 48, 64, and 80) and six aspect ratios (0.2, 0.5, 0.8, 1.0, 1.2, and 1.5), for a total of 24 prior bounding boxes at each sliding position, which suits text properties as well as less common cases. In the learning stage, we assign a positive label to a prior box that has an intersection-over-union (IoU) overlap greater than 0.5 with a ground-truth bounding box, and a background label to a prior box whose IoU overlap with every ground truth is less than 0.3.
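The prior-box construction and label assignment above can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation: the function names are ours, and we assume the aspect ratio means height/width (so wide text boxes have ratio < 1) and that scale fixes the box area.

```python
import numpy as np

def make_prior_boxes(cx, cy, scales=(32, 48, 64, 80),
                     ratios=(0.2, 0.5, 0.8, 1.0, 1.2, 1.5)):
    """Generate the 24 text characteristic prior boxes (x1, y1, x2, y2)
    centered at one sliding position (cx, cy)."""
    boxes = []
    for s in scales:
        for r in ratios:
            # assumption: r = height / width, s^2 = box area
            w = s / np.sqrt(r)
            h = s * np.sqrt(r)
            boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(boxes)

def iou(box, gt):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (box[2] - box[0]) * (box[3] - box[1])
    a2 = (gt[2] - gt[0]) * (gt[3] - gt[1])
    return inter / (a1 + a2 - inter)

def assign_label(box, gts):
    """Positive (1) if IoU > 0.5 with some ground truth, background (0)
    if max IoU < 0.3; other boxes are ignored during training (None)."""
    best = max(iou(box, gt) for gt in gts)
    if best > 0.5:
        return 1
    if best < 0.3:
        return 0
    return None
```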

Inception-RPN. We design Inception-RPN inspired by the idea of the inception module in GoogLeNet [Szegedy et al.(2015)Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, and Rabinovich], which uses flexible convolutional and pooling kernel sizes in a layer-by-layer structure for local feature extraction and has proved robust for large-scale image classification. Our inception network consists of a 3×3 convolution, a 5×5 convolution, and a 3×3 max pooling layer, each fully connected to the corresponding spatial receptive field of the input Conv5_3 feature maps. That is, we apply the 3×3 convolution, 5×5 convolution, and 3×3 max pooling simultaneously to extract a local feature representation over the Conv5_3 feature maps at each sliding position. In addition, a 1×1 convolution is employed on top of the max pooling layer for dimension reduction. We then concatenate the features of each branch along the channel axis, and the resulting 640-d feature vector is fed into two sibling output layers: a classification layer that predicts the textness score of the region and a regression layer that refines the text region location for each kind of prior bounding box at this sliding position. An illustration of Inception-RPN is shown in the top part of Fig. 1. Inception-RPN has the following advantages: (1) the multi-scale sliding-window feature retains local as well as contextual information thanks to its center-aligned receptive fields at each sliding position, which helps to classify text and non-text prior bounding boxes; (2) the coexistence of convolution and pooling is effective for extracting more abstract representative features, as addressed in [Szegedy et al.(2015)Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, and Rabinovich]; and (3) experiments show that Inception-RPN substantially improves word recall at different IoU thresholds with the same number of word region proposals.

Note that for a Conv5_3 feature map of size m × n, Inception-RPN generates 24 × m × n prior bounding boxes as candidate word region proposals, some of which are redundant and highly overlap with others. Therefore, after each prior bounding box is scored and refined, we apply non-maximum suppression (NMS) [Neubeck and Gool(2006)] with an IoU overlap threshold of 0.7 to retain the bounding box with the highest textness score and rapidly suppress the lower-scoring boxes in its neighborhood. We then select the top-2000 candidate word region proposals for the text detection network.
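The NMS step is the standard greedy procedure: sort by score, keep the best box, and discard neighbors that overlap it too much. A sketch in NumPy (function name and vectorization are ours):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.7):
    """Greedy non-maximum suppression on (x1, y1, x2, y2) boxes:
    keep the highest-scoring box, drop neighbors whose IoU with it
    exceeds iou_thresh, and repeat on the survivors."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = boxes[order[1:]]
        # intersection of box i with every remaining box
        xx1 = np.maximum(boxes[i, 0], rest[:, 0])
        yy1 = np.maximum(boxes[i, 1], rest[:, 1])
        xx2 = np.minimum(boxes[i, 2], rest[:, 2])
        yy2 = np.minimum(boxes[i, 3], rest[:, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (rest[:, 2] - rest[:, 0]) * (rest[:, 3] - rest[:, 1])
        ious = inter / (area_i + area_r - inter)
        order = order[1:][ious <= iou_thresh]
    return keep
```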

2.2 Text Detection

ATC incorporation. As in many previous works (e.g., [Ren et al.(2015)Ren, He, Girshick, and Sun]), a positive label is assigned to a proposal that has an IoU overlap greater than 0.5 with a ground-truth bounding box, while a background label is assigned to a proposal whose IoU overlap with every ground truth falls below 0.5 in the detection network. However, this binary partitioning is unreasonable for text, because a proposal with an IoU overlap in the interval [0.2, 0.5) may still contain partial or even extensive text information, as shown in Fig. 2. Such promiscuous label information may confuse the learning of the text and non-text classification network. To tackle this issue, we refine the proposal label partition strategy to make it suitable for text classification: we assign a positive text label to a proposal that has an IoU overlap greater than 0.5 with a ground truth, an additional "ambiguous text" label to a proposal whose IoU overlap with a ground-truth bounding box lies in the range [0.2, 0.5), and a background label to any proposal with an IoU overlap of less than 0.2 with every ground truth. We expect that incorporating this more reasonable supervised information helps the classifier learn more discriminative features to distinguish text from complex and diverse backgrounds and to filter out non-text region proposals.
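The three-way label partition can be written as a small rule. This is an illustrative helper of ours (`atc_label` is hypothetical), and the treatment of the exact interval endpoints is our assumption:

```python
def atc_label(max_iou):
    """Three-way proposal labeling for the detection network:
    positive text (IoU > 0.5), 'ambiguous text' (0.2 <= IoU <= 0.5),
    background (IoU < 0.2), given the proposal's best IoU with any
    ground-truth box."""
    if max_iou > 0.5:
        return "text"
    if max_iou >= 0.2:
        return "ambiguous"
    return "background"
```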

Figure 2: Example word region proposals with an IoU overlap within the interval [0.2, 0.5).

MLRP. The ROI pooling procedure performs adaptive max pooling and outputs a max-pooled feature with the original number of channels and a fixed spatial extent for each bounding box. Previous state-of-the-art object detection models, such as SPP-Net [He et al.(2014)He, Zhang, Ren, and Sun], Fast R-CNN [Girshick(2015)], and Faster R-CNN [Ren et al.(2015)Ren, He, Girshick, and Sun], simply apply ROI pooling over the last convolutional layer (Conv5_3) of the VGG16 model. However, to better utilize multi-level convolutional features and enrich the discriminative information of each bounding box, we perform MLRP over both the Conv4_3 and Conv5_3 convolutional feature maps of the VGG16 network and obtain two pooled features (both output height and width are set to 7 in practice). We concatenate the two pooled features along the channel axis and encode the concatenated feature with a 1×1 convolutional layer. This 1×1 convolutional layer (1) combines the multi-level pooled features, learning the fusion weights during training, and (2) reduces the channel dimension to match the input of VGG16's first fully connected layer. The multi-level weighted fusion feature is then passed to the follow-up bounding box classification and regression model. An illustration of MLRP is depicted in the bottom half of Fig. 1.
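A rough sketch of MLRP in NumPy, assuming for simplicity that both feature maps share the same resolution (in the real network Conv4_3 and Conv5_3 have different strides, so the ROI would be rescaled per map; the names and shapes here are illustrative, not the paper's code):

```python
import numpy as np

def roi_max_pool(fmap, roi, out=7):
    """Adaptive max pooling of fmap (C x H x W) over roi
    (x1, y1, x2, y2 in feature-map coordinates) to C x out x out."""
    c = fmap.shape[0]
    x1, y1, x2, y2 = roi
    pooled = np.empty((c, out, out))
    ys = np.linspace(y1, y2, out + 1).astype(int)
    xs = np.linspace(x1, x2, out + 1).astype(int)
    for i in range(out):
        for j in range(out):
            # guarantee each pooling cell covers at least one pixel
            cell = fmap[:, ys[i]:max(ys[i + 1], ys[i] + 1),
                           xs[j]:max(xs[j + 1], xs[j] + 1)]
            pooled[:, i, j] = cell.reshape(c, -1).max(axis=1)
    return pooled

def mlrp(conv4, conv5, roi, w_1x1):
    """Multi-level ROI pooling: pool both maps, concatenate along the
    channel axis, then fuse with a 1x1 convolution (a per-pixel matmul
    with weights w_1x1 of shape (C_out, C4 + C5))."""
    p4 = roi_max_pool(conv4, roi)           # C4 x 7 x 7
    p5 = roi_max_pool(conv5, roi)           # C5 x 7 x 7
    cat = np.concatenate([p4, p5], axis=0)  # (C4 + C5) x 7 x 7
    return np.einsum('oc,chw->ohw', w_1x1, cat)
```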

2.3 End-to-end learning optimization

Both Inception-RPN and the text detection network have two sibling output layers: a classification layer and a regression layer. They differ as follows. (1) For Inception-RPN, each kind of prior bounding box is parameterized independently, so with k = 24 prior boxes per sliding position the classification layer outputs 2k textness scores, evaluating the probability of text or non-text for each proposal, while the regression layer outputs 4k values encoding the offsets of the refined bounding boxes. (2) For the text detection network, there are three output scores, corresponding to the background, ambiguous text, and positive text categories, and four bounding box regression offsets for each positive text proposal (only positive text region proposals are passed to the bounding box regression model). We minimize a multi-task loss function, as in [Girshick et al.(2014)Girshick, Donahue, Darrell, and Malik]:

L(p, p*, t, t*) = L_cls(p, p*) + λ p* L_reg(t, t*),    (1)

where the classification loss L_cls is a softmax loss, and p and p* are the predicted and true labels, respectively. The regression loss L_reg applies the smooth-L1 loss defined in [Girshick(2015)]. Besides, t = (t_x, t_y, t_w, t_h) and t* = (t*_x, t*_y, t*_w, t*_h) stand for the predicted and ground-truth bounding box regression offset vectors, respectively, where t* is encoded as follows:
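For reference, the smooth-L1 function from Fast R-CNN, applied element-wise to the offset differences, is:

```python
def smooth_l1(x):
    """Smooth-L1 from Fast R-CNN: quadratic near zero, linear outside,
    i.e. 0.5 * x^2 if |x| < 1, else |x| - 0.5."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1 else ax - 0.5
```

The quadratic region keeps gradients small near zero, while the linear region makes the loss less sensitive to outlier offsets than an L2 loss.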

t*_x = (x* − x)/w,   t*_y = (y* − y)/h,   t*_w = log(w*/w),   t*_h = log(h*/h).    (2)

Here, (x, y, w, h) and (x*, y*, w*, h*) denote the center coordinates (x-axis and y-axis), width, and height of the proposal and ground-truth box, respectively. Furthermore, λ is a loss-balancing parameter; we set λ larger for Inception-RPN than for the text detection network to bias it towards better box locations.
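The offset encoding of Equation (2), and its inverse used at test time to refine a proposal, can be sketched as follows (hypothetical helper names; boxes are in center-size form):

```python
import math

def encode(proposal, gt):
    """Regression targets t* for a proposal (x, y, w, h) and a
    ground-truth box (x*, y*, w*, h*), per Equation (2)."""
    x, y, w, h = proposal
    xs, ys, ws, hs = gt
    return ((xs - x) / w, (ys - y) / h,
            math.log(ws / w), math.log(hs / h))

def decode(proposal, t):
    """Invert the encoding: apply predicted offsets to a proposal to
    obtain the refined box."""
    x, y, w, h = proposal
    tx, ty, tw, th = t
    return (x + tx * w, y + ty * h, w * math.exp(tw), h * math.exp(th))
```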

In contrast to the four-step training strategy proposed in [Ren et al.(2015)Ren, He, Girshick, and Sun] to combine RPN and Fast R-CNN, we train our Inception-RPN and text detection network in an end-to-end manner via back-propagation and stochastic gradient descent (SGD), as given in Algorithm 1. The shared convolutional layers are initialized from a VGG16 model pre-trained for ImageNet classification [Simonyan and Zisserman(2015)]. All the weights of the new layers are initialized from a zero-mean Gaussian distribution with a standard deviation of 0.01. The base learning rate is 0.001 and is divided by 10 every 40K mini-batches until convergence. We use a momentum of 0.9 and a weight decay of 0.0005. All experiments were conducted in Caffe [Jia et al.(2014)Jia, Shelhamer, Donahue, Karayev, Long, Girshick, Guadarrama, and Darrell].

2.4 Heuristic processing

Iterative bounding box voting. In order to make better use of the intermediate models from the overall training process, we propose an iterative bounding box voting scheme, which can be considered a simplified version of the method in [Gidaris and Komodakis(2015)]. We use D_t = {(b_i^t, s_i^t)} to denote the set of detection candidates generated for the positive text class in an image on iteration t, where b_i^t is the i-th bounding box and s_i^t is the corresponding textness score. For t = 1, ..., T, we merge the detection candidate sets of all iterations into D = ∪_t D_t. We then adopt NMS [Neubeck and Gool(2006)] on D with an IoU overlap threshold of 0.3 to suppress low-scoring neighboring boxes. In this way, we obtain a high recall of text instances in a complementary manner and improve the performance of the text detection system.
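A minimal sketch of the voting scheme (illustrative names of ours; each inner list holds (box, score) pairs from one intermediate model, and NMS is written inline as a greedy pass over the merged, score-sorted pool):

```python
def box_iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def iterative_bbox_voting(detection_sets, iou_thresh=0.3):
    """Merge the detections produced by each intermediate model, then
    greedily suppress any box overlapping a higher-scoring kept box."""
    merged = sorted((d for ds in detection_sets for d in ds),
                    key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in merged:
        if all(box_iou(box, kb) <= iou_thresh for kb, _ in kept):
            kept.append((box, score))
    return kept
```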

Filtering. Based on empirical observation, we note that even after NMS [Neubeck and Gool(2006)] processing, multiple inner boxes or outer boxes may still exist for one text instance in the detection candidate set, which may severely harm the precision of the text detection system. To address this problem, we present a filtering algorithm that finds the inner and outer bounding boxes of each text instance in terms of coordinate position, preserves the bounding box with the highest textness score, and removes the others. Thus, we can remove redundant detection boxes and substantially improve precision.
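One way to realize such a filter is sketched below, under the assumption that "inner/outer" means full coordinate containment of one box by another (the function names and the pairwise strategy are ours, not the paper's):

```python
def contains(outer, inner):
    """True if box `outer` fully contains box `inner`,
    both in (x1, y1, x2, y2) form."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            outer[2] >= inner[2] and outer[3] >= inner[3])

def filter_nested(dets):
    """For any pair of detections (box, score) where one box fully
    contains the other, keep only the one with the higher textness
    score and drop the other."""
    drop = set()
    for i, (bi, si) in enumerate(dets):
        for j, (bj, sj) in enumerate(dets):
            if i == j or i in drop or j in drop:
                continue
            if contains(bi, bj) or contains(bj, bi):
                drop.add(i if si < sj else j)
    return [d for k, d in enumerate(dets) if k not in drop]
```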

3 Experiments and Analysis

0:    Input: set of training images with their ground-truth bounding boxes; learning rate; mini-batch sample numbers; maximum iteration number.
0:    Output: network parameters for the shared convolutional layers, Inception-RPN, and the text detection network.
1:  Randomly select one training sample and produce the prior bounding box classification labels and bounding box regression targets according to its ground truths;
2:  Randomly sample positive and negative prior bounding boxes to compute the loss function in Equation (1);
3:  Run backward propagation to obtain the gradients with respect to the Inception-RPN parameters and obtain the word proposal set;
4:  Adopt NMS with the set IoU threshold on the word proposal set and select the top-k ranked proposals for Step 5;
5:  Randomly sample positive text, ambiguous text, and background word region proposals from the kept proposals to compute the loss function in Equation (1);
6:  Run backward propagation to obtain the gradients with respect to the text detection network parameters;
7:  Update the network parameters with the accumulated gradients and the learning rate;
8:  If the network has converged, output the network parameters and end the procedure; otherwise, return to Step 1.
Algorithm 1 End-to-end optimization method for the DeepText training process.

3.1 Experimental Data

The ICDAR 2011 dataset includes 229 training and 255 testing images, and the ICDAR 2013 dataset contains 229 training and 233 testing images. Obviously, this number of training images is too small to train a reasonable network. To increase the diversity and number of training samples, we collected an indoor database consisting of 1,715 natural images for text detection and recognition from the Flickr website, which is publicly available online (https://www.dropbox.com/s/06wfn5ugt5v3djs/SCUT_FORU_DB_Release.rar?dl=0) and free for research use. In addition, we manually selected 2,028 images from the COCO-Text benchmark [Veit et al.(2016)Veit, Matera, Neumann, Matas, and Belongie]. Ultimately, we collected 4,072 training images in total.

3.2 Evaluation of Inception-RPN

In this section, we compare Inception-RPN with text characteristic prior bounding boxes (Inception-RPN-TCPB) to state-of-the-art region proposal algorithms, namely SS [de Sande et al.(2011)de Sande, Uijlings, Gevers, and Smeulders], EB [Zitnick and Dollár(2014)], and standard RPN [Ren et al.(2015)Ren, He, Girshick, and Sun]. We compute the word recall rate of word region proposals at different IoU overlap thresholds with ground-truth bounding boxes on the ICDAR 2013 testing set, which includes 1,095 word-level annotated text regions. In Fig. 3, we show the results of using N = 100, 300, and 500 word region proposals, where the N proposals are the top-N word region proposals ranked by the scores each method produces. The plots demonstrate that our Inception-RPN-TCPB considerably outperforms standard RPN by 8%-10% and is superior to SS and EB by a notable margin when the number of word region proposals drops from 500 to 100. Thus, our proposed Inception-RPN-TCPB is capable of achieving a recall of nearly 90% with only hundreds of word region proposals. Moreover, the recall rate with 300 word region proposals approximates that with 500, so we simply use the top-300 word region proposals for the text detection network at test time.
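The recall metric plotted in Fig. 3 can be computed as follows (`word_recall` is a hypothetical helper of ours, not code from the paper):

```python
def word_recall(proposals, gts, iou_thresh=0.5):
    """Fraction of ground-truth word boxes covered by at least one
    proposal with IoU >= iou_thresh; boxes are (x1, y1, x2, y2)."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)
    hit = sum(any(iou(p, g) >= iou_thresh for p in proposals) for g in gts)
    return hit / len(gts)
```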

3.3 Analysis of text detection network

In this section, we investigate the effect of ATC incorporation and MLRP on the text detection network. First, we use our proposed Inception-RPN-TCPB to generate 300 word region proposals for each image in the ICDAR 2013 testing set. Next, we assign a positive label to word region proposals that have an IoU overlap greater than 0.5 with a ground-truth bounding box, and a negative label to proposals whose IoU overlap with every ground truth is less than 0.5. In total, we collected 8,481 positive and 61,419 negative word region proposals. We then evaluated the true positive (TP) rate and false positive (FP) rate of the baseline model and of the model employing ATC and MLRP. The results are shown in Table 1. The model using ATC and MLRP increases the TP rate by 3.13% and decreases the FP rate by 0.82%, which shows that incorporating more reasonable supervised and multi-level information is effective for learning more discriminative features to distinguish text from complex and diverse backgrounds.

Figure 3: Recall vs. IoU overlap threshold on the ICDAR 2013 testing set. Left: 100 word region proposals. Middle: 300 word region proposals. Right: 500 word region proposals.
Model TP(%) FP(%)
Baseline model 85.61 11.20
ATC+MLRP 88.74 10.38
Table 1: Performance evaluation of ATC and MLRP based on TP and FP rates.

3.4 Experimental results on full text detection

We evaluate the proposed DeepText detection system on the ICDAR 2011 and 2013 robust text detection benchmarks following the standard evaluation protocols of ICDAR 2011 [Wolf and Jolion(2006)] and 2013 [Karatzas et al.(2013)Karatzas, Shafait, Uchida, Iwamura, i Bigorda, Mestre, Mas, Mota, Almazan, , and de las Heras]. Our DeepText system achieves F-measures of 0.83 and 0.85 on the ICDAR 2011 and 2013 datasets, respectively. Comparisons with recent methods on the two benchmarks are shown in Tables 2 and 3. It is worth noting that although Sun et al. [Sun et al.(2015)Sun, Huo, and jia] achieved superior results on the ICDAR 2011 and 2013 datasets, their method is not directly comparable because they used millions of additional samples for training, whereas we used only 4,072 training samples. The tables show that our proposed approach outperforms previous results by a substantial margin, which can be attributed to simultaneously taking high recall and precision into consideration in our system. The high performance achieved on both datasets highlights the robustness and effectiveness of our proposed approach. Further, qualitative detection results under diverse challenging conditions are shown in Fig. 4, which demonstrates that our system is capable of detecting text under non-uniform illumination, in multiple small regions, and in low-contrast regions of natural images. In addition, our system takes 1.7 s per image on average on a single NVIDIA K40 GPU.

Method Year Precision Recall F-measure
DeepText (ours) N/A 0.85 0.81 0.83
TextFlow [Tian et al.(2015)Tian, Pan, Huang, Lu, Yu, , and Tan] ICCV 2015 0.86 0.76 0.81
Zhang et al. [Zhang et al.(2015)Zhang, Shen, Yao, and Bai] CVPR 2015 0.84 0.76 0.80
MSERs-CNN [Huang et al.(2014)Huang, Qiao, and Tang] ECCV 2014 0.88 0.71 0.78
Yin et al. [Yin et al.(2014)Yin, Yin, Huang, and Hao] TPAMI 2014 0.86 0.68 0.75
Faster-RCNN [Ren et al.(2015)Ren, He, Girshick, and Sun] NIPS 2015 0.74 0.71 0.72
Table 2: Comparison with state-of-the-art methods on the ICDAR 2011 benchmark.
Method Year Precision Recall F-measure
DeepText (ours) N/A 0.87 0.83 0.85
TextFlow [Tian et al.(2015)Tian, Pan, Huang, Lu, Yu, , and Tan] ICCV 2015 0.85 0.76 0.80
Zhang et al. [Zhang et al.(2015)Zhang, Shen, Yao, and Bai] CVPR 2015 0.88 0.74 0.80
Lu et al. [Lu et al.(2015)Lu, Chen, Tian, Lim, and Tan] IJDAR 2015 0.89 0.70 0.78
Neumann et al.[Neumann and Matas(2015)] ICDAR 2015 0.82 0.72 0.77
FASText [Busta et al.(2015)Busta, Neumann, and Matas] ICCV 2015 0.84 0.69 0.77
Iwrr2014 [Zamberletti et al.(2014)Zamberletti, Noce, and Gallo] ACCVW 2014 0.86 0.70 0.77
Yin et al. [Yin et al.(2014)Yin, Yin, Huang, and Hao] TPAMI 2014 0.88 0.66 0.76
Text Spotter [Neumann and Matas(2012)] CVPR 2012 0.88 0.65 0.75
Faster-RCNN [Ren et al.(2015)Ren, He, Girshick, and Sun] NIPS 2015 0.75 0.71 0.73
Table 3: Comparison with state-of-the-art methods on the ICDAR 2013 benchmark.
Figure 4: Example detection results of our DeepText system on the ICDAR 2011 and ICDAR 2013 benchmarks.

4 Conclusion

In this paper, we presented a novel unified framework called DeepText for text detection in natural images with a powerful fully convolutional neural network trained in an end-to-end manner. DeepText consists of an Inception-RPN with a set of text characteristic prior bounding boxes for high-quality word proposal generation and a powerful text detection network for proposal classification and accurate localization. After applying an iterative bounding box voting scheme and a filtering algorithm to remove redundant boxes for each text instance, we obtain our high-performance text detection system. Experimental results show that our approach achieves state-of-the-art F-measure performance on the ICDAR 2011 and 2013 robust text detection benchmarks, substantially outperforming previous methods. We note that there is still considerable room for improvement in both recall and precision. In the future, we plan to further enhance the recall rate of the candidate word region proposals and the accuracy of proposal classification and location regression.

References

  • [Arbelaez et al.(2014)Arbelaez, Pont-Tuset, Barron, Marques, and Malik] P. Arbelaez, J. Pont-Tuset, J. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping. In Proc. CVPR, 2014.
  • [Busta et al.(2015)Busta, Neumann, and Matas] M. Busta, L. Neumann, and J. Matas. FASText: Efficient unconstrained scene text detector. In Proc. ICCV, 2015.
  • [de Sande et al.(2011)de Sande, Uijlings, Gevers, and Smeulders] K. E. Van de Sande, J. R. Uijlings, T. Gevers, and A. W. Smeulders. Segmentation as selective search for object recognition. In Proc. ICCV, 2011.
  • [Dollár et al.(2014)Dollár, Appel, Belongie, and Perona] P. Dollár, R. Appel, S. Belongie, and P. Perona. Fast feature pyramids for object detection. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36(8):1532–1545, 2014.
  • [Epshtein et al.(2010)Epshtein, Ofek, and Y.Wexler] B. Epshtein, E. Ofek, and Y.Wexler. Detecting text in natural scenes with stroke width transform. In Proc. CVPR, 2010.
  • [Gidaris and Komodakis(2015)] S. Gidaris and N. Komodakis. Object detection via a multiregion & semantic segmentation-aware cnn model. In Proc. ICCV, 2015.
  • [Girshick(2015)] R. Girshick. Fast r-cnn. In Proc. ICCV, 2015.
  • [Girshick et al.(2014)Girshick, Donahue, Darrell, and Malik] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proc. CVPR, 2014.
  • [He et al.(2014)He, Zhang, Ren, and Sun] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In Proc. ECCV, 2014.
  • [Huang et al.(2014)Huang, Qiao, and Tang] W. Huang, Y. Qiao, and X. Tang. Robust scene text detection with convolutional neural networks induced mser trees. In Proc. ECCV, 2014.
  • [Jaderberg et al.(2014)Jaderberg, Vedaldi, and Zisserman] M. Jaderberg, A. Vedaldi, and A. Zisserman. Deep features for text spotting. In Proc. ECCV, 2014.
  • [Jaderberg et al.(2016)Jaderberg, Simonyan, Vedaldi, and Zisserman] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Reading text in the wild with convolutional neural networks. International Journal of Computer Vision, 116(1):1–20, 2016.
  • [Jia et al.(2014)Jia, Shelhamer, Donahue, Karayev, Long, Girshick, Guadarrama, and Darrell] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
  • [Karatzas et al.(2011)Karatzas, Mestre, Mas, Nourbakhsh, and Roy] D. Karatzas, S. Robles Mestre, J. Mas, F. Nourbakhsh, and P. Pratim Roy. ICDAR 2011 robust reading competition. In Proc. ICDAR, 2011.
  • [Karatzas et al.(2013)Karatzas, Shafait, Uchida, Iwamura, i Bigorda, Mestre, Mas, Mota, Almazan, , and de las Heras] D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. G. i Bigorda, S. R. Mestre, J. Mas, D. F. Mota, J. A. Almazan, and L. P. de las Heras. ICDAR 2013 robust reading competition. In Proc. ICDAR, 2013.
  • [Lu et al.(2015)Lu, Chen, Tian, Lim, and Tan] S. Lu, T. Chen, S. Tian, J. Lim, and C. Tan. Scene text extraction based on edges and support vector regression. International Journal on Document Analysis and Recognition, 18(2):125–135, 2015.
  • [Neubeck and Gool(2006)] A. Neubeck and L. Van Gool. Efficient non-maximum suppression. In Proc. ICPR, 2006.
  • [Neumann and Matas(2010)] L. Neumann and J. Matas. A method for text localization and recognition in real-world images. In Proc. ACCV, 2010.
  • [Neumann and Matas(2015)] L. Neumann and J. Matas. Efficient scene text localization and recognition with local character refinement. In Proc. ICDAR, 2015.
  • [Neumann and Matas(2012)] L. Neumann and J. Matas. Real-time scene text localization and recognition. In Proc. CVPR, 2012.
  • [Ren et al.(2015)Ren, He, Girshick, and Sun] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proc. NIPS, 2015.
  • [Simonyan and Zisserman(2015)] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proc. ICLR, 2015.
  • [Sun et al.(2015)Sun, Huo, and jia] L. Sun, Q. Huo, and W. Jia. A robust approach for text detection from natural scene images. Pattern Recognition, 48(9):2906–2920, 2015.
  • [Szegedy et al.(2015)Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, and Rabinovich] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proc. CVPR, 2015.
  • [Tian et al.(2015)Tian, Pan, Huang, Lu, Yu, , and Tan] S. Tian, Y. Pan, C. Huang, S. Lu, K. Yu, and C. L. Tan. TextFlow: A unified text detection system in natural scene images. In Proc. ICCV, 2015.
  • [Veit et al.(2016)Veit, Matera, Neumann, Matas, and Belongie] A. Veit, T. Matera, L. Neumann, J. Matas, and S. Belongie. COCO-Text: Dataset and benchmark for text detection and recognition in natural images. arXiv preprint arXiv:1601.07140, 2016.
  • [Wang et al.(2012)Wang, Wu, Coates, and Ng] T. Wang, D. J. Wu, A. Coates, and A. Y. Ng. End-to-end text recognition with convolutional neural networks. In Proc. ICPR, 2012.
  • [Wolf and Jolion(2006)] C. Wolf and J. Jolion. Object count/area graphs for the evaluation of object detection and segmentation algorithms. International Journal on Document Analysis and Recognition, 8(4):280–296, 2006.
  • [Yin et al.(2014)Yin, Yin, Huang, and Hao] X. Yin, X. Yin, K. Huang, and H. Hao. Robust text detection in natural scene images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(5):970–983, 2014.
  • [Zamberletti et al.(2014)Zamberletti, Noce, and Gallo] A. Zamberletti, L. Noce, and I. Gallo. Text localization based on fast feature pyramids and multi-resolution maximally stable extremal regions. In Proc. Workshop of ACCV, 2014.
  • [Zhang et al.(2016)Zhang, Lin, Chen, Jin, and Lin] S. Zhang, M. Lin, T. Chen, L. Jin, and L. Lin. Character proposal network for robust text extraction. In Proc. ICASSP, 2016.
  • [Zhang et al.(2015)Zhang, Shen, Yao, and Bai] Z. Zhang, W. Shen, C. Yao, and X. Bai. Symmetry-based text line detection in natural scenes. In Proc. CVPR, 2015.
  • [Zitnick and Dollár(2014)] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In Proc. ECCV, 2014.