Context-Free TextSpotter for Real-Time and Mobile End-to-End Text Detection and Recognition

06/10/2021
by Ryota Yoshihashi, et al. (Yahoo)

In the deployment of scene-text spotting systems on mobile platforms, lightweight models with low computation are preferable. In concept, end-to-end (E2E) text spotting is suitable for such purposes because it performs text detection and recognition in a single model. However, current state-of-the-art E2E methods rely on heavy feature extractors, recurrent sequence modeling, and complex shape aligners to pursue accuracy, which means their computations are still heavy. We explore the opposite direction: How far can we go without bells and whistles in E2E text spotting? To this end, we propose a text-spotting method that consists of simple convolutions and a few post-processes, named Context-Free TextSpotter. Experiments using standard benchmarks show that Context-Free TextSpotter achieves real-time text spotting on a GPU with only three million parameters, which is the smallest and fastest among existing deep text spotters, with an acceptable transcription-quality degradation compared to heavier ones. Further, we demonstrate that our text spotter can run on a smartphone with affordable latency, which is valuable for building stand-alone OCR applications.



1 Introduction

Figure 1: Recognition quality vs. speed of scene text spotters, evaluated with a GPU on the ICDAR2015 incidental scene text benchmark with the Strong lexicon. The proposed Context-Free TextSpotter runs faster than existing ones with similar accuracy, and enables near-real-time text spotting with a certain accuracy degradation.

Scene text spotting is a fundamental task with a variety of applications such as image-based translation, life logging, and industrial automation. To make these useful applications more conveniently accessible, deploying text-spotting systems on mobile devices has shown promise, as evidenced by the recent spread of smartphones. If text spotting runs client-side on mobile devices, users can enjoy the advantages of edge computing, such as availability outside of communication range, savings on the packet-communication fees incurred by uploading images, and fewer concerns about privacy violations from leaked uploads.

However, while recent deep text-detection and recognition methods have become highly accurate, they are now so heavy that they cannot be easily deployed on mobile devices. A heavy model may mean that its inference is computationally expensive, or that it has many parameters and a large file size; either is unfavorable for mobile users. Mobile CPUs and GPUs are generally underpowered compared to those on servers, so the inference latency of slow models may become intolerable. The large file sizes of models are also problematic, because they consume large amounts of memory and traffic, both of which are typically limited on smartphones. For example, the Google Play app store limits the maximum app size to 100 MB to keep developers cognizant of this problem, but apps equipped with large deep recognition models may easily exceed the limit without careful compression.

The abovementioned problems motivated us to develop a lightweight text-spotting method for mobiles. For this purpose, the recently popular end-to-end (E2E) text spotting shows promise. E2E recognition, which performs joint learning of modules (e.g., text detection and recognition) on a shared feature representation, has been increasingly used in deep learning to enhance the efficiency and effectiveness of models, and much research effort has already been dedicated to E2E text spotters [4, 32, 20, 28, 29]. Nevertheless, most existing E2E text spotters are not speed-oriented; they typically run at around five to eight frames per second (FPS) on high-end GPUs, which is not feasible for weaker mobile computation power. The reason for this heaviness lies in the complex design of modern text spotters. Typically, a deep text spotter consists of 1) a backbone network as a feature extractor, 2) a text-detection head with RoI pooling, and 3) a text-recognition head. Each of these components is a potential computational bottleneck: existing text spotters 1) adopt heavy backbones such as VGG-16 or ResNet50 [4], 2) often utilize RoI transformers customized for text recognition to geometrically align curved texts [32, 33], and 3) use text-recognition heads that run per text box and contain unshared convolution and recurrent layers, which are necessary for the sequence modeling of word spellings [32, 20].

Then, how can we make an E2E text spotter light enough for mobile usage? We argue that the answer is an E2E text spotter with as few modules in addition to convolution as possible. Since convolution is the simplest building block of deep visual systems and the de facto standard operation for which well-optimized kernels are offered in many environments, it has an intrinsic advantage in efficiency. Here, we propose Context-Free TextSpotter, an E2E text-spotting method without bells and whistles. It simply consists of lightweight convolution layers and a few post-processes that extract text polygons and character labels from the convolutional features. This text spotter is context-free in the sense that it relies neither on the linguistic context conventionally provided by per-word-box sequence modeling through LSTMs, nor on the spatial context provided by geometric RoI transformations and geometry-aware pooling modules. These simplifications may not seem straightforward, as text spotters usually need sequence-to-sequence mapping to predict arbitrary-length texts from visual features. Our idea for alleviating this complexity is to decompose a text spotter into a character spotter and a text-box detector that work in parallel. In our framework, a character-spotting module spots points in convolutional features that are readable as characters and then classifies the point features to give character labels to the points. Later, text boxes and point-wise character labels are merged by a simple rule to construct word-recognition results. Intuitively, characters are less curved in scenes than whole words are, and character recognition is easier to tackle without geometric operations. Sequence modeling becomes unnecessary in character classification, in contrast to the box-to-word recognition utilized in other methods. Further, by introducing weakly supervised learning, our method does not need character-level annotations of real images to train the character-spotting part.

In experiments, we found that Context-Free TextSpotter worked surprisingly well for its simplicity on standard benchmarks of scene text spotting. It achieved a word-spotting H-mean of 84 at 25 FPS on the ICDAR2013 focused-text dataset, and an H-mean of 74 at 12 FPS on the ICDAR2015 incidental-text dataset under the Strong-lexicon protocol. It is around three times faster than typical recent text spotters, while its recognition degradation is around five to ten percentage points. Another advantage is the flexibility of the model: it can control the accuracy-speed balance simply by scaling the input-image resolution within a single trained model, as shown in Fig. 1. Context-Free TextSpotter ran the fastest among deep text spotters that have reported their inference speed, and it is thus useful for extrapolating the current accuracy-speed trade-off curve into untrodden speed-oriented areas. Finally, we demonstrate that our text spotter can run on a smartphone with affordable latency, which is valuable for building stand-alone OCR applications.

Our contributions are summarized as follows: 1) we design a novel, simple text-spotting framework called Context-Free TextSpotter; 2) we develop techniques for enhancing the efficiency and effectiveness of our text spotter, including linear point-wise character decoding and a hybrid approach to character spotting, described in Sec. 3; and 3) in experiments, we confirm that our text spotter runs the fastest among existing deep text spotters and can be deployed on an iPhone to run with acceptable latency.

2 Related work

Figure 2: Three typical text-spotting pipelines. a) Non-E2E separate text detectors and recognizers, b) two-stage E2E text spotters, and c) single-shot text spotters. We use c) for the highest parallelizability.

2.1 Text detection and recognition

Conventionally, text detection and recognition are regarded as related but separate tasks, and most studies have focused on one or the other. In text recognition, classical methods use character classifiers in a sliding-window manner [48], while more modern plain methods consist of convolutional feature extractors and recurrent-network-based text decoders [42]. Among recurrent architectures, bidirectional long short-term memory (BiLSTM [14]) is the most popular choice. More recently, self-attention-based transformers, which have been successful in NLP, have been intensively examined [54]. Connectionist temporal classification (CTC) [13] is used as an objective function in sequence prediction, and attention-based text decoding has later been intensively researched as a more powerful alternative [6]. For poorly cropped or curved texts, learning-based image rectification has been proposed [43]. These techniques are also useful for designing recognition branches in text spotting.

Deep text detectors are roughly categorized into two types: box-based detectors and segment-based ones. Box-based detectors typically estimate text-box coordinates by regression from learned features. Such techniques have been intensively studied in object detection, and they are easily generalized to text detection. However, box-based detectors often have difficulty localizing curved texts accurately unless a curve-handling mechanism is adopted. TextBoxes [30] is directly inspired by the object detector SSD [31], and regresses rectangles from feature maps. SSTD [19] exploits text attention modules that are supervised by text-mask annotations. EAST [52] treats oriented texts by utilizing regression for box angles. CTPN [44] and RTN [53] similarly utilize strip-shaped boxes and connect them to represent curved text regions.

Segment-based detectors estimate foreground masks in a pixel-labeling manner and extract their connected components as text instances. While they are naturally able to handle oriented and curved texts, they sometimes mis-connect multiple text lines when the lines are close together. For example, fully convolutional networks [34] can be applied to text detection with text/non-text mask prediction and some post-processing for center-line detection and orientation estimation [51]. PixelLink [11] enhances the separability of text masks by introducing 8-neighbor pixel-wise connectivity prediction in addition to text/non-text masks. TextSnake [35] adopts a disk-chain representation that is predicted from text-line masks and radius maps. CRAFT [2] enhances learning effectiveness by modeling character-region awareness and between-character affinity using a sophisticated supervisory-mask generation scheme, which we also adopt in our method.

2.2 End-to-end text recognition

End-to-end (E2E) learning refers to a class of learning methods that link inputs to outputs with a single learnable pipeline. It has been successfully applied to text recognition and detection. While the earliest method used a sliding-window character detector and word construction through a pictorial structure [46], most modern E2E text recognizers have been based on deep CNNs. The majority of existing E2E text spotters are based on a two-stage framework (Fig. 2 b). Deep TextSpotter [4] was the first deep E2E text spotter; it utilizes YOLOv2 as a region-proposal network (RPN) and cascades recognition branches of convolution + CTC after it. FOTS [32] adopts a similar framework of an RPN and recognition branches, but improves oriented-text recognition by adding an RoIRotate operation. Mask TextSpotter [36, 28, 29] has an instance-mask-based recognition head that does not rely on CTC. The recently proposed CRAFTS [3] is a segment-based E2E text spotter featuring enhanced recognition that shares character attention with a CRAFT-based detection branch.

Single-shot text spotting (Fig. 2 c), in contrast, has not been extensively researched so far. CharNet [50] utilizes parallel text-box detection and character semantic segmentation. Despite its conceptual simplicity, it is not faster than two-stage spotters due to its heavy backbone and dense character labeling. MANGO [39] instead exploits dense mask-based attention to eliminate the necessity of RoI operations. However, its character decoder relies on an attention-based BiLSTM, and iterative inference is still needed, which may create a computational bottleneck for long texts. Here, our contribution is to show that a single-shot spotter actually runs faster than existing two-stage ones when proper simplification is done.

2.3 Real-time image recognition

The large computation cost of deep nets is an issue in many fields, and various studies have explored general techniques for reducing computation cost, while others focus on designing lightweight models for specific tasks. Some examples of the former are pruning [18], quantization [9, 23], and lighter operators [17, 5]. While these are applicable to general models, it is not clear whether the loss of computational precision they cause is within a tolerable range for text localization, which requires high exactness for correct reading. For the latter, lightweight models are seen in many vision tasks, such as image classification, object detection, and segmentation. However, lightweight text spotting is much less studied, seemingly due to the complexity of the task, although there has been some research on mobile text detection [12, 10, 8, 24]. We are aware of only one previous work: Light TextSpotter [15], a slimmed-down mask-based text spotter that features distillation and a ShuffleNet backbone. Although it is 40% lighter than Mask TextSpotter, its inference speed is not as fast as ours due to its large mask-based recognition branch.

3 Method

Figure 3: Overview of proposed Context-Free TextSpotter.

The design objective behind the proposed Context-Free TextSpotter is to pursue minimalism in text spotting. As a principle, a deep E2E text spotter needs to be able to detect text boxes in images and recognize their contents, while the detector and recognizer can share the feature representations they exploit. Thus, a text spotter, at minimum, needs a feature extractor, a text-box detector, and a text recognizer.

In text spotters, the text recognizer tends to become the most complex part of the system, as text recognition from detected boxes needs to perform sequence-to-sequence prediction, that is, to relate an arbitrary-length feature sequence to an arbitrary-length text. Also, the recognizer's computation depends on the detector's output, which makes the pipeline less parallelizable (Fig. 2 b). We break the text recognizer down into a character spotter and a character classifier, where the character spotter pinpoints the coordinates of characters within the uncropped feature maps of whole images, and the character classifier classifies each spotted character regardless of the other characters around it or the text box to which it belongs (Fig. 2 c).

An overview of Context-Free TextSpotter is shown in Fig. 3. It consists of 1) a U-Net-based feature extractor, 2) heat-map-based character and text-box detectors, and 3) a character decoder, called the linear point-wise decoder, which is specially tailored for our method. The rest of this section describes the details of each module.

3.1 Text-box detector and character spotter

To take advantage of shape flexibility and interpretability, we adopt segment-based methods for character and text-box detection. More specifically, we roughly follow the CRAFT [2] text detector in the text-box localization procedure. Briefly, our text-box detector generates two heat maps: a region map and an affinity map. The region map is trained to activate strongly around the centers of characters, and the affinity map is trained to activate in the areas between characters within single text boxes. In inference, connected components of the sum of the region map and the affinity map are extracted as text instances.
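The text-instance extraction above can be sketched in a few lines of NumPy. The binarization threshold and the function name are illustrative assumptions rather than values from the paper, and the hand-written flood fill stands in for a library connected-component routine:

```python
import numpy as np

def extract_text_boxes(region, affinity, thresh=0.4):
    """Binarize (region + affinity) and return axis-aligned bounding
    boxes (xmin, ymin, xmax, ymax) of connected foreground components,
    one per text instance. A simplified CRAFT-style post-process;
    `thresh` is an assumed value."""
    mask = (region + affinity) > thresh
    h, w = mask.shape
    seen = np.zeros_like(mask)
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                # flood fill over 4-neighbors to collect one component
                stack, comp = [(y, x)], []
                seen[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys = [p[0] for p in comp]
                xs = [p[1] for p in comp]
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Note how the affinity map acts as glue: two character blobs that are disjoint in the region map become one text instance once the affinity between them exceeds the threshold.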

For character spotting, we reuse the region map of the CRAFT-based text detector, and thus we do not need to add extra modules to our network. Further, we adopt point-based character spotting, instead of the more common box- or polygon-based character detection, to eliminate the necessity for box/polygon regression and RoI pooling operations.

To select character points from the region map, we prefer simple image-processing-based techniques over learning-based ones in order to avoid unnecessary overhead. We consider two approaches: a labelling-based approach that groups heat-map activations, and a peak-detection-based approach that picks up local maxima of the heat map. The former is adopted for spotting large characters, and the latter for spotting small characters. An insight helpful to the point-selector design is that the form of the heat map's activation differs largely with the scale of the characters in the input image. Examples of heat-map activation for large and small characters are shown in Fig. 4. For small characters, labelling tends to fail to disconnect close characters, although it causes fewer duplicated detections of single characters. This tendency comes from the difference in activation patterns of the region heat maps: activation for large texts reflects the detailed shapes of characters, which may cause multiple peaks, while activation for small texts results in simpler single-peaked blobs.

For labelling-based spotting of large characters, we use connected-component analysis [49] to link separate character regions: first we binarize the region heat map by a threshold, then link the foreground pixels, and finally pick up the centroid of each connected component. For peak-based spotting of small characters, we use the local maxima of the region heat map, namely

$$\mathcal{P} = \{\, p \mid R(p) > R(q) \;\; \forall q \in \mathcal{N}_8(p) \,\}, \qquad (1)$$

where $R$ denotes the region heat map and $\mathcal{N}_8(p)$ denotes the 8-neighbors of $p$ in the 2-D coordinate system. Later, points extracted by the labelling-based spotter are used in character decoding if they are included in large text boxes, and points from peak-based spotting are used if included in small text boxes, where large and small are defined by whether the shorter side of the box is longer or shorter than a certain threshold.
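The peak-detection branch amounts to an 8-neighbor local-maximum test on the region heat map. A minimal NumPy sketch, with an assumed confidence floor `thresh` that the paper does not specify (the labelling branch for large characters reuses ordinary connected-component centroids):

```python
import numpy as np

def local_maxima(heat, thresh=0.3):
    """Return (row, col) points that are strictly greater than all
    eight neighbors and above an assumed confidence floor."""
    H, W = heat.shape
    p = np.pad(heat, 1, constant_values=-np.inf)  # border never wins
    c = p[1:1 + H, 1:1 + W]                       # the original map
    peaks = c > thresh
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) != (0, 0):
                # compare against the map shifted by (dy, dx)
                peaks &= c > p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    return list(zip(*np.nonzero(peaks)))
```

Because the comparison is strict, a flat plateau of equal values yields no peak, which matches the intuition that small characters produce sharp single-peaked blobs.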

Figure 4: Examples of small and large texts. The corresponding region heat maps (second row) show different characteristics that lead labeling-based approaches (third row) to fail (red boxes) on small texts, and peak-detection-based approaches (last row) to fail on large texts. Thus, we adopt a hybrid approach in character spotting.

3.2 Character decoder

Our character decoder, called the linear point-wise decoder, simply performs linear classification of the feature vectors at detected character points. Given a set of points $P$, a feature map $F$ with $C$ channels and size $H \times W$, and the number of classes (i.e., types of characters) $K$, the decoder is denoted as

$$\mathrm{cls}(P, F) = \mathrm{softmax}\left( F[P]\, W + b \right), \qquad (2)$$

where $F[P]$ denotes the index operation that extracts the feature vectors at the points $P$ and stacks them into an $N \times C$-shaped matrix, with $N$ the number of points. The parameters of the linear transformation, $W$ and $b$, are part of the learnable parameters of the model and are optimized by back propagation during training, jointly with the other network parameters; $W$ is a $C \times K$ matrix and $b$ is a $K$-dimensional vector broadcast over the $N$ rows. Finally, row-wise softmax is taken along the channel axis to form classification probabilities. Then $\mathrm{cls}(P, F)$ becomes an $N \times K$-shaped matrix, whose element at $(i, j)$ encodes the probability that the $i$-th point is recognized as the $j$-th character.

After giving a character probability to each point, we further filter out the points that are not confidently classified by applying a threshold. Finally, each character point is assigned to the text box that includes it, and, for each text box, the character points assigned to it are sorted by their x coordinates and read left to right.
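Putting the decoder and the reading rule together, a NumPy sketch might look as follows. `CHARSET`, the confidence threshold, and the toy weight shapes are illustrative assumptions; a real implementation would use the learned $W$ and $b$:

```python
import numpy as np

CHARSET = "abcdefghijklmnopqrstuvwxyz0123456789"  # illustrative alphabet

def decode_points(feat, points, W, b, conf_thresh=0.5):
    """Linear point-wise decoding followed by left-to-right reading.
    feat: C x H x W feature map; points: list of (y, x) character points;
    W: C x K matrix, b: K-vector (stand-ins for the learned classifier)."""
    feats = np.stack([feat[:, y, x] for (y, x) in points])    # N x C
    logits = feats @ W + b                                     # N x K
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)                   # row softmax
    # keep confident points, remember their x coordinate for ordering
    kept = [(x, CHARSET[probs[i].argmax()])
            for i, (y, x) in enumerate(points)
            if probs[i].max() > conf_thresh]
    return "".join(ch for _, ch in sorted(kept))               # read by x
```

Note that no recurrence or attention is involved: each point is classified independently, and the word is assembled by the sorting rule alone.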

It is worth noting that linear point-wise decoding is numerically equivalent to semantic-segmentation-style character decoding, except that it is computed sparsely. By regarding the decoder's classification weights and biases as a 1×1 convolution layer, the decoder can be applied densely to the feature map, as shown in Fig. 5 a. Despite its implementational simplicity, the dense variant suffers from a heavy output tensor: the output has an $H \times W \times \#\{\text{classes}\}$-sized label space, where $H \times W$ is the desired output resolution and #{classes} is the number of types of characters we want to recognize. In text recognition, #{classes} is already large for Latin script, which has over 94 alphanumerics and symbols in total, and may be even larger for other scripts (for example, there are over 2,000 types of Chinese characters). In practice, dense labeling of 94 characters from the feature map consumes around 60 MB of memory and 0.48 G floating-point operations in the output layer alone, regardless of how many letters are present in the image. Given 2,000 characters, the output layer needs 1.2 GB of memory, which critically limits training and inference efficiency. Thus, we prefer the point-wise alternative (Fig. 5 b) to maintain scalability and to avoid unnecessary computation when there are few letters in images.
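The arithmetic behind this comparison is easy to reproduce. The 400×400 output resolution below is our own assumption, chosen only to be consistent with the ~60 MB figure quoted above:

```python
def dense_cost_bytes(h, w, n_classes, bytes_per=4):
    """Output-tensor size of segmentation-style decoding:
    one float32 score per pixel per class."""
    return h * w * n_classes * bytes_per

def pointwise_cost_bytes(n_points, n_classes, bytes_per=4):
    """Output size of linear point-wise decoding:
    one class-score vector per spotted character point."""
    return n_points * n_classes * bytes_per

dense_latin = dense_cost_bytes(400, 400, 94)     # ~60 MB
dense_cjk = dense_cost_bytes(400, 400, 2000)     # ~1.3 GB
sparse_cjk = pointwise_cost_bytes(50, 2000)      # ~0.4 MB for 50 characters
```

The dense cost is fixed by the image resolution, while the point-wise cost scales with the number of characters actually present, which is usually tiny in comparison.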

Figure 5: Comparison between semantic-segmentation-based character decoding and our linear point-wise decoding. The linear point-wise decoding has advantages in space and time complexity when #{points} is small and #{classes} is large.

3.3 Feature extractor

Most scene text spotters use VGG16 or ResNet50 as their backbone feature extractors, which are heavy for mobiles. Our choice for a lightweight design is CSP-PeleeNet [45], which was originally designed for mobile object detection [47] and further lightened by adding cross-stage partial connections [45] without reducing ImageNet accuracy. The network follows the DenseNet [22] architecture but is substantially smaller, having 32, 128, 256, and 704 channels in its 1/1-, 1/4-, 1/8-, and 1/16-scaled stages, respectively, and a total of 21 stacked dense blocks.

On the CSPPeleeNet backbone, we construct a U-Net [40] by adding upsampling modules for feature decoding. We put the output layer of the U-Net at the 1/4-scaled stage, which corresponds to the second block of CSPPeleeNet.¹

¹Note that Fig. 3 does not precisely reflect the structure due to layout constraints.

3.4 Training

Context-Free TextSpotter is end-to-end differentiable and trainable with ordinary gradient-based solvers. The training objective is defined as

$$L = L_{\mathrm{det}} + \lambda L_{\mathrm{rec}}, \qquad (3)$$
$$L_{\mathrm{det}} = \frac{1}{HW} \sum_{p} \left( \left\| R(p) - R^{*}(p) \right\|^{2} + \left\| A(p) - A^{*}(p) \right\|^{2} \right), \qquad (4)$$
$$L_{\mathrm{rec}} = \sum_{(p,\, c) \in G} \mathrm{CE}\left( \mathrm{cls}(p),\, c \right), \qquad (5)$$

where $H \times W$ denotes the feature-map size, $R, A$ and $R^{*}, A^{*}$ denote the predicted region and affinity maps and their corresponding ground truths, and $G$ denotes the character-level ground truths that indicate the locations and classes of the characters. CE denotes the cross-entropy classification loss, and $\lambda$ is a hyperparameter that balances the two losses.

This training objective needs character-level annotations, but many scene-text recognition benchmarks only provide word-level ones. Therefore, we adopt weakly supervised learning that exploits approximated character-level labels in the earlier training stage and updates the labels by self-labeling in the later stage. In the earlier stage, for stability, we fix the character-level labels to the centers of approximated character polygons. Here, the approximated character polygons are calculated by equally dividing the word polygons into word-length parts along their word lines. The ground truths of the region and affinity maps are generated from the same approximated character polygons, in the same manner as in the CRAFT [2] detector.

In the later stage, the character-level labels are updated by self-labeling: the word regions of training images are cropped by the ground-truth polygon annotations, the network under training predicts heat maps from the cropped images, and the points spotted in them are used as new character-level labels combined with the ground-truth word transcriptions. Word instances for which the number of spotted points does not equal the length of the ground-truth transcription are discarded, because the mismatch suggests inaccurate self-labeling. We also exploit synthetic text images with character-level annotations by setting their labels at the character-polygon centers.
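The earlier-stage label approximation can be sketched for the simplest case as follows. This sketch handles only axis-aligned rectangles, whereas the method divides general word polygons along their word lines; the function name is ours:

```python
def approx_char_centers(word_box, transcription):
    """Approximate character centers by splitting an axis-aligned word
    box (x0, y0, x1, y1) into len(transcription) equal slices along the
    horizontal writing direction, and taking each slice's center."""
    x0, y0, x1, y1 = word_box
    n = len(transcription)
    step = (x1 - x0) / n          # width of one character slice
    cy = (y0 + y1) / 2.0          # vertical center of the word line
    return [(x0 + (i + 0.5) * step, cy) for i in range(n)]
```

Each returned center then serves as one fixed character-level label, paired with the corresponding letter of the transcription.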

4 Experiments

To evaluate the effectiveness and efficiency of Context-Free TextSpotter, we conducted intensive experiments using scene-text recognition datasets. Further, we analyzed factors that control quality and speed, namely, choice of backbones and input-image resolution, and conducted ablative analyses of the modules. Finally, we deployed Context-Free TextSpotter as an iPhone application and measured the on-device inference speed to demonstrate the feasibility of mobile text spotting.

Datasets

We used the ICDAR2013 [26] and ICDAR2015 [25] datasets for evaluation. ICDAR2013 thematizes focused scene texts and consists of 229 training and 233 testing images. ICDAR2015 thematizes incidental scene texts and consists of 1,000 training and 500 testing images. In the evaluation, we followed the competitions' official protocols. For detection, we used DetEval for ICDAR2013 and IoU with threshold 0.5 for ICDAR2015, and calculated precision, recall, and their harmonic means (H-means). For recognition, we used the end-to-end and word-spotting protocols, with the provided Strong (100 words), Weak (1,000 words), and Generic (90,000 words) lexicons.

For training, we additionally used the SynthText [16] and ICDAR2019 MLT [37] datasets to supplement the relatively small training sets of ICDAR2013/2015. SynthText is a synthetic text dataset that provides 800K images and character-level annotations by polygons. ICDAR2019 MLT is a multi-lingual dataset with 10,000 training images. We replaced non-Latin scripts with ignore symbols so that we could use the dataset to train a Latin text spotter.

| Method | Det-P | Det-R | Det-H | WS-S | WS-W | WS-G | E2E-S | E2E-W | E2E-G | Params (M) | FPS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| *Standard models* | | | | | | | | | | | |
| EAST [52] | 83.5 | 73.4 | 78.2 | | | | | | | | 13.2 |
| Seglink [41] | 73.1 | 76.8 | 75.0 | | | | | | | | 8.9 |
| CRAFT [2] | 89.8 | 84.3 | 86.9 | | | | | | | 20* | 5.1* |
| StradVision [25] | | | | 45.8 | | | 43.7 | | | | |
| Deep TS [4] | | | | 58 | 53 | 51 | 54 | 51 | 47 | | 9 |
| Mask TS [20] | 91.6 | 81.0 | 86.0 | 79.3 | 74.5 | 64.2 | 79.3 | 73.0 | 62.4 | 87* | 2.6 |
| FOTS [32] | 91.0 | 85.1 | 87.9 | 84.6 | 79.3 | 63.2 | 81.0 | 75.9 | 60.8 | 34 | 7.5 |
| Parallel Det-Rec [27] | 83.7 | 96.1 | 89.5 | 89.0 | 84.4 | 68.8 | 85.3 | 80.6 | 65.8 | | 3.7 |
| CharNet [50] | 89.9 | 91.9 | 90.9 | | | | 83.1 | 79.1 | 69.1 | 89 | 0.9* |
| CRAFTS [3] | 89.0 | 85.3 | 87.1 | | | | 83.1 | 82.1 | 74.9 | | 5.4 |
| *Lightweight models* | | | | | | | | | | | |
| PeleeText [8] | 85.1 | 72.3 | 78.2 | | | | | | | 10.3 | 11 |
| PeleeText++ [7] | 81.6 | 78.2 | 79.9 | | | | | | | 7.0 | 15 |
| PeleeText++ MS [7] | 87.5 | 76.6 | 81.7 | | | | | | | 7.0 | 3.6 |
| Mask TS mini [20] | | | | 71.6 | 63.9 | 51.6 | 71.3 | 62.5 | 50.0 | 87* | 6.9 |
| FOTS RT [32] | 85.9 | 79.8 | 82.7 | 76.7 | 69.2 | 53.5 | 73.4 | 66.3 | 51.4 | 28 | 10* |
| Light TS [15] | 94.5 | 70.7 | 80.0 | | | | 77.2 | 70.9 | 65.2 | 34* | 4.8 |
| Context-Free (ours) | 88.4 | 77.1 | 82.9 | 74.3 | 67.1 | 54.6 | 70.2 | 63.4 | 52.4 | 3.1 | 12 |

Table 1: Text detection and recognition results on ICDAR2015 (Det = detection P/R/H-mean; WS = word spotting and E2E = end-to-end, each with Strong/Weak/Generic lexicons). Bold indicates the best performance and underline indicates the second best within lightweight text-spotting models. * indicates our re-measured or re-calculated numbers, while the others are excerpted from the literature.
| Method | Det-P | Det-R | Det-H | WS-S | WS-W | WS-G | E2E-S | E2E-W | E2E-G | Params (M) | FPS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| *Standard models* | | | | | | | | | | | |
| EAST [52] | 92.6 | 82.6 | 87.3 | | | | | | | | 13.2 |
| Seglink [41] | 87.7 | 83.0 | 85.3 | | | | | | | | 20.6 |
| CRAFT [2] | 97.4 | 93.1 | 95.2 | | | | | | | 20* | 10.6* |
| StradVision [25] | | | | 85.8 | 82.8 | 70.1 | 81.2 | 78.5 | 67.1 | | |
| Deep TS [4] | | | | 92 | 89 | 81 | 89 | 86 | 77 | | 10 |
| Mask TS [20] | 91.6 | 81.0 | 86.0 | 92.5 | 92.0 | 88.2 | 92.2 | 91.1 | 86.5 | 87* | 4.8 |
| FOTS [32] | | | 88.3 | 92.7 | 90.7 | 83.5 | 88.8 | 87.1 | 80.8 | 34 | 13* |
| Parallel Det-Rec [27] | | | | 95.0 | 93.7 | 88.7 | 90.2 | 88.9 | 84.5 | | |
| CRAFTS [3] | 96.1 | 90.9 | 93.4 | | | | 94.2 | 93.8 | 92.2 | | 8.3 |
| *Lightweight models* | | | | | | | | | | | |
| MobText [10] | 88.3 | 66.6 | 76.0 | | | | | | | 9.2 | |
| PeleeText [8] | 80.1 | 79.8 | 80.0 | | | | | | | 10.3 | 18 |
| PeleeText++ [7] | 87.0 | 73.5 | 79.7 | | | | | | | 7.0 | 23 |
| PeleeText++ MS [7] | 92.4 | 80.0 | 85.7 | | | | | | | 7.0 | 3.6 |
| Context-Free (ours) | 85.1 | 83.4 | 84.2 | 83.9 | 80.0 | 69.1 | 80.1 | 76.4 | 67.1 | 3.1 | 25 |

Table 2: Text detection and recognition results on ICDAR2013.

Implementation details

The number of character-recognition feature channels was set to 256. Input images were resized so that their longer sides were 2,880 pixels for ICDAR2015 and 1,280 pixels for ICDAR2013, keeping the aspect ratio. For training, we used Adam with an initial learning rate of 0.001, a recognition-loss weight of 0.01, and a batch size of five, where three images were from the real training sets and the rest were from SynthText. For evaluation, we used weighted edit distance [36] for lexicon matching. Our implementation was based on PyTorch-1.2.0 and ran on a virtual machine with an Intel Xeon Silver 4114 CPU, 16 GB of RAM, and an NVIDIA Tesla V100 (16-GB VRAM) GPU. All run times were measured with a batch size of one.
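The longer-side resizing rule can be expressed as a small helper; the function name is ours, and the target values come from the implementation details above:

```python
def resize_longer_side(w, h, target):
    """Return a new (w, h) whose longer side equals `target` while the
    aspect ratio is preserved (e.g., target=2880 for ICDAR2015 and
    1280 for ICDAR2013)."""
    scale = target / max(w, h)
    return round(w * scale), round(h * scale)
```

Scaling `target` up or down is also how the single trained model trades accuracy for speed in Fig. 1, without any retraining.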

4.1 Results

Figure 6: Text-spotting results of Context-Free TextSpotter on ICDAR2013 and 2015. The lexicons are not used in this visualization. Red characters indicate wrong recognition, and purple ones indicate missed characters. Best viewed zoomed in, in digital format.

Table 1 summarizes the detection and recognition results on ICDAR2015. Context-Free TextSpotter achieved an 82.9 detection H-mean and a 74.3 word-spotting H-mean with the Strong lexicon, with 3.1 million parameters and 12 FPS on a GPU. The inference speed and model size of Context-Free were the best among all compared text spotters.

For a detailed comparison, we separated the text spotters into two types: standard models and lightweight models. While it is difficult to introduce such a separation cleanly due to the continuous nature of the size-performance balance in neural networks, we categorized as lightweight 1) models that are claimed to be lightweight in their papers and 2) smaller or faster versions reported in papers whose focus was creating standard-sized models. The PeleeText [8, 7] family consists of detection-only models based on Pelee [47], but their model sizes are larger than ours due to feature-pyramid-based bounding-box regression. Mask TextSpotter mini (Mask TS mini) and FOTS Real Time (FOTS RT) are smaller versions reported in the corresponding literature, acquired by scaling the input image size. However, their model sizes remain large due to their full-size backbones, while their accuracies were comparable to ours. This suggests that simple minification of standard models is not optimal when repurposing them for mobile or fast inference. On another note, FOTS RT was originally reported to run at 22.6 FPS in customized Caffe with a TITAN Xp GPU. Since it is closed-source, we re-measured a third-party PyTorch reimplementation (https://github.com/jiangxiluning/FOTS.PyTorch), whose FPS was 10 with a batch size of one. We report the latter number in the table in order to prioritize comparisons in the same environment. The other methods that we re-measured showed faster speeds than in their original reports, seemingly due to our newer GPU. Light TextSpotter, which is based on a lightened Mask TextSpotter, had an advantage in the evaluation with the Generic lexicon, but it is much heavier than ours due to the instance-mask branches in its network.

Table 2 summarizes the detection and recognition results on ICDAR2013. ICDAR2013 contains focused texts with relatively large and clear appearances, which enables good recognition accuracy at smaller test-time input sizes than ICDAR2015 requires. While all text spotters ran faster on this dataset, Context-Free was again the fastest among them. Light TextSpotter and the smaller versions of FOTS and Mask TextSpotter were not evaluated on ICDAR2013 and thus are omitted here. This leaves no lightweight competitors in text spotting, but ours outperformed the lightweight text detectors while our model was smaller and our inference faster than theirs.

To analyze the effect of input-image size on inference, we collected results with different input sizes and visualize them in Fig. 1. Here, input images were resized with bilinear interpolation, keeping their aspect ratios, before inference. We used a single model trained with cropped images of a fixed size, without retraining. The best results were obtained when the longer sides were 2,880 pixels, where the speed was 12 FPS. We also obtained a near-real-time speed of 26 FPS with 1,600-pixel inputs, even on the more difficult ICDAR2015 dataset, with a 62.8 H mean.
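For illustration, the aspect-ratio-preserving resize can be sketched as follows (our own snippet, using the 2,880-pixel setting from above; the bilinear interpolation itself would be performed by the image library):

```python
def resize_keep_aspect(width, height, longer_side=2880):
    """Scale (width, height) so the longer side equals `longer_side`,
    preserving the aspect ratio, as in the test-time preprocessing."""
    scale = longer_side / max(width, height)
    return round(width * scale), round(height * scale)

# An actual resize would then use bilinear interpolation, e.g. with PIL:
#   image.resize(resize_keep_aspect(*image.size), resample=Image.BILINEAR)
```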

Table 3 summarizes ablative analyses in which we removed the modules and techniques adopted in the final model. Our observations are four-fold. First, real-image training is critical (a vs. b1). Second, end-to-end training by itself did not improve detection much (b1 vs. b2), but removing boxes that were not validly read as text (i.e., ones transcribed as empty strings by our recognizer) significantly boosted the detection score (c1). Third, the combination of peak- and labeling-based character spotting (called hybrid) was useful (c1–3). Fourth, weighted edit distance [28] and adding the MLT training data [27], both known techniques in the literature, were also helpful for our model (d and e). Table 4 compares different backbones used to implement our method. We tested MobileNetV3-Large [21] and GhostNet [17] as recent fast backbones, and confirmed that CSPPeleeNet was the best among them.
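For illustration, a simplified weighted edit distance can be written as a small dynamic program in which character-level recognition confidences set the edit costs, so that low-confidence characters are cheap to correct (a sketch of the idea; the exact cost design of [28] may differ):

```python
def weighted_edit_distance(pred, word, confs, wrong_cost=1.0):
    """Simplified weighted edit distance between a predicted string and a
    lexicon word. Deleting or substituting a predicted character costs its
    recognition confidence confs[i]; insertions cost `wrong_cost`. With
    all confidences set to 1.0 this reduces to plain edit distance.
    """
    n, m = len(pred), len(word)
    # dp[i][j]: minimum cost of matching pred[:i] against word[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + confs[i - 1]          # delete pred[i-1]
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + wrong_cost            # insert word[j-1]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if pred[i - 1] == word[j - 1] else confs[i - 1]
            dp[i][j] = min(dp[i - 1][j] + confs[i - 1],  # deletion
                           dp[i][j - 1] + wrong_cost,    # insertion
                           dp[i - 1][j - 1] + sub)       # match/substitution
    return dp[n][m]
```

For example, if the recognizer reads "cot" but assigned only 0.2 confidence to the middle character, matching against "cat" costs just 0.2, so the lexicon word wins easily over keeping the raw transcription.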

Finally, as a potential drawback of context-free text recognition, we found that duplicated readings of single characters and minor spelling errors sometimes appeared uncorrected in the outputs, as seen in the examples in Fig. 6 and in the recognition metrics with the Generic lexicon. These may become more of a problem when the target lexicon is large. Thus, we investigated the effect of lexicon size on text-spotting accuracy. We used Weak as a base lexicon and inflated it by adding randomly sampled words from Generic. The trials were repeated five times and we plotted the averages. The results, shown in Fig. 7, indicate that the recognition degradation was relatively gentle while the lexicon size stayed below 10,000. This suggests that Context-Free TextSpotter is to some extent robust against a large vocabulary, despite lacking language modelling.
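This lexicon-inflation protocol can be sketched as follows, with Python's difflib standing in for the edit-distance matcher and placeholder word lists (none of the names below come from the actual lexicons):

```python
import difflib
import random

def match_to_lexicon(transcription, lexicon):
    """Snap a raw transcription to its closest lexicon word, or keep it
    as-is if nothing is similar enough. difflib's ratio-based matching
    stands in here for the edit-distance matching used in the paper."""
    hits = difflib.get_close_matches(transcription, lexicon, n=1, cutoff=0.6)
    return hits[0] if hits else transcription

# Inflating a small base lexicon with randomly sampled distractor words,
# as in the lexicon-size experiment (placeholder words):
base = ["coffee", "station", "exit"]
distractors = ["coffin", "nation", "exist", "stationery"]
random.seed(0)
inflated = base + random.sample(distractors, k=2)

print(match_to_lexicon("cofffee", inflated))  # duplicated character corrected
```

As the lexicon grows, more near-neighbours like "coffin" enter the candidate pool, which is why recognition errors such as duplicated characters become riskier with a large vocabulary.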

     Training      E2E  Removing      Character  Lexicon      Det H  WS S
                        unreadables   spotting   matching
a)   Synth         -    -             -          -            64.8   -
b1)  + IC15        -    -             -          -            79.9   -
b2)  + IC15        X    -             Peak       Edit dist.   80.2   -
c1)  + IC15        X    X             Peak       Edit dist.   81.0   70.5
c2)  + IC15        X    X             Label      Edit dist.   78.4   48.4
c3)  + IC15        X    X             Hybrid     Edit dist.   81.0   71.3
d)   + IC15        X    X             Hybrid     Weighted ED  81.0   72.8
e)   + IC15 & 19   X    X             Hybrid     Weighted ED  82.9   74.3

Table 3: Ablative analyses of the modules and techniques we adopted.
Table 4: Comparisons of lightweight backbones in our framework.

Backbone      IC13  IC15  FPS
MobileNetV3   67.1  50.1  11
GhostNet      74.6  63.3  9.7
CSPPeleeNet   83.9  74.3  12
Figure 7: Lexicon size vs. recognition H mean.
Figure 8: On-device benchmarking with an iPhone 11 Pro.

4.2 On-device Benchmarking

To confirm the feasibility of deploying our text spotter on mobile devices, we conducted on-device benchmarking. We used an iPhone 11 Pro with the Apple A13 CPU and 4 GB of memory, and ported our PyTorch model to the CoreML framework to run on it. We also implemented Swift-based post-processing (i.e., connected-component analysis and peak detection) to include its computation time in the benchmarking. With our ICDAR2013 input-size setting, the average inference latency was 399 msec on the CPU and 51 msec with the Neural Engine [1] hardware accelerator. Usability studies suggest that a latency within 100 msec makes “the user feel that the system is reacting instantaneously”, while 1,000 msec is “the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay” [38]. Our text spotter achieves the former when an accelerator is available, and the latter on a mobile CPU. Latencies for other input sizes are shown in Fig. 8.
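These two thresholds can be expressed as a tiny helper (our own illustration of the limits from [38], not code from the usability literature):

```python
def perceived_responsiveness(latency_ms):
    """Classify a latency against Nielsen's interaction limits [38]:
    up to 100 ms feels instantaneous; up to 1,000 ms keeps the user's
    flow of thought uninterrupted; anything slower breaks the flow."""
    if latency_ms <= 100:
        return "instantaneous"
    if latency_ms <= 1000:
        return "flow uninterrupted"
    return "flow broken"

# The measured latencies on the iPhone 11 Pro:
perceived_responsiveness(51)    # Neural Engine
perceived_responsiveness(399)   # CPU
```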

We also tried to port existing text spotters. MaskTextSpotters [28, 29] could not be ported to CoreML due to their customized layers. While implementing the special text-spotting layers on the CoreML side would solve this, the experience also shows that simpler convolution-only models like ours have an advantage in portability. Another convolution-only model, CharNet [50], ran on the iPhone but fairly slowly. With the Neural Engine, it took around 1.0 second to process 1,280-pixel images and could not process larger ones due to memory limits. With the CPU, it took 8.1 seconds for 1,280-pixel images and 18.2 seconds for 1,920-pixel ones, and ran out of memory for larger sizes.

5 Conclusion

In this work, we have proposed Context-Free TextSpotter, an end-to-end text-spotting method without bells and whistles that enables real-time and mobile text spotting. We hope our method inspires developers of OCR applications and serves as a stepping stone for researchers who want to prototype their ideas quickly.

Acknowledgements

We would like to thank Katsushi Yamashita, Daeju Kim, and members of the AI Strategy Office in SoftBank for helpful discussion.

References

  • [1] Apple Developer Documentation: MLComputeUnits. https://developer.apple.com/documentation/coreml/mlcomputeunits, accessed: 2021-01-28
  • [2] Baek, Y., Lee, B., Han, D., Yun, S., Lee, H.: Character region awareness for text detection. In: CVPR. pp. 9365–9374 (2019)
  • [3] Baek, Y., Shin, S., Baek, J., Park, S., Lee, J., Nam, D., Lee, H.: Character region attention for text spotting. In: ECCV. pp. 504–521 (2020)
  • [4] Busta, M., Neumann, L., Matas, J.: Deep TextSpotter: An end-to-end trainable scene text localization and recognition framework. In: ICCV. pp. 2204–2212 (2017)
  • [5] Chen, H., Wang, Y., Xu, C., Shi, B., Xu, C., Tian, Q., Xu, C.: AdderNet: Do we really need multiplications in deep learning? In: CVPR. pp. 1468–1477 (2020)
  • [6] Cheng, Z., Bai, F., Xu, Y., Zheng, G., Pu, S., Zhou, S.: Focusing attention: Towards accurate text recognition in natural images. In: ICCV. pp. 5076–5084 (2017)
  • [7] Córdova, M., Pinto, A., Pedrini, H., Torres, R.d.S.: Pelee-Text++: A tiny neural network for scene text detection. IEEE Access (2020)
  • [8] Córdova, M.A., Decker, L.G., Flores-Campana, J.L., dos Santos, A.A., Conceição, J.S.: Pelee-Text: A tiny convolutional neural network for multi-oriented scene text detection. In: ICMLA. pp. 400–405 (2019)

  • [9] Courbariaux, M., Bengio, Y., David, J.P.: BinaryConnect: Training deep neural networks with binary weights during propagations. In: NeurIPS (2015)
  • [10] Decker, L.G.L., Pinto, A., Campana, J.L.F., Neira, M.C., dos Santos, A.A., Conceição, J.S., Angeloni, M.A., Li, L.T., et al.: MobText: A compact method for scene text localization. In: VISAPP (2020)
  • [11] Deng, D., Liu, H., Li, X., Cai, D.: PixelLink: Detecting scene text via instance segmentation. In: AAAI. vol. 32 (2018)
  • [12] Fu, K., Sun, L., Kang, X., Ren, F.: Text detection for natural scene based on mobilenet v2 and u-net. In: ICMA. pp. 1560–1564 (2019)
  • [13] Graves, A., Fernández, S., Gomez, F., Schmidhuber, J.: Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In: NeurIPS. pp. 369–376 (2006)

  • [14] Graves, A., Fernández, S., Schmidhuber, J.: Bidirectional LSTM networks for improved phoneme classification and recognition. In: International Conference on Artificial Neural Networks (ICANN). pp. 799–804. Springer (2005)
  • [15] Guan, J., Zhu, A.: Light TextSpotter: An extreme light scene text spotter. In: International Conference on Neural Information Processing (ICONIP). pp. 434–441. Springer (2020)
  • [16] Gupta, A., Vedaldi, A., Zisserman, A.: Synthetic data for text localisation in natural images. In: CVPR. pp. 2315–2324 (2016)
  • [17] Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., Xu, C.: GhostNet: More features from cheap operations. In: CVPR. pp. 1580–1589 (2020)
  • [18] Han, S., Pool, J., Tran, J., Dally, W.J.: Learning both weights and connections for efficient neural networks. In: NeurIPS (2015)
  • [19] He, P., Huang, W., He, T., Zhu, Q., Qiao, Y., Li, X.: Single shot text detector with regional attention. In: ICCV. pp. 3047–3055 (2017)
  • [20] He, T., Tian, Z., Huang, W., Shen, C., Qiao, Y., Sun, C.: An end-to-end TextSpotter with explicit alignment and attention. In: CVPR. pp. 5020–5029 (2018)
  • [21] Howard, A., Sandler, M., Chu, G., Chen, L.C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., et al.: Searching for MobileNetV3. In: ICCV (2019)
  • [22] Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: CVPR. pp. 4700–4708 (2017)
  • [23] Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Quantized neural networks: Training neural networks with low precision weights and activations. The Journal of Machine Learning Research 18(1), 6869–6898 (2017)
  • [24] Jeon, M., Jeong, Y.S.: Compact and accurate scene text detector. Applied Sciences 10(6),  2096 (2020)
  • [25] Karatzas, D., Gomez-Bigorda, L., Nicolaou, A., Ghosh, S., Bagdanov, A., Iwamura, M., Matas, J., Neumann, L., Chandrasekhar, V.R., Lu, S., et al.: ICDAR 2015 competition on robust reading. In: ICDAR. pp. 1156–1160 (2015)

  • [26] Karatzas, D., Shafait, F., Uchida, S., Iwamura, M., i Bigorda, L.G., Mestre, S.R., Mas, J., Mota, D.F., Almazan, J.A., De Las Heras, L.P.: ICDAR 2013 robust reading competition. In: ICDAR. pp. 1484–1493 (2013)
  • [27] Li, J., Zhou, Z., Su, Z., Huang, S., Jin, L.: A new parallel detection-recognition approach for end-to-end scene text extraction. In: ICDAR. pp. 1358–1365 (2019)
  • [28] Liao, M., Lyu, P., He, M., Yao, C., Wu, W., Bai, X.: Mask TextSpotter: An end-to-end trainable neural network for spotting text with arbitrary shapes. PAMI (2019)
  • [29] Liao, M., Pang, G., Huang, J., Hassner, T., Bai, X.: Mask TextSpotter v3: Segmentation proposal network for robust scene text spotting. In: ECCV (2020)
  • [30] Liao, M., Shi, B., Bai, X.: TextBoxes++: A single-shot oriented scene text detector. Transactions on Image Processing 27(8), 3676–3690 (2018)
  • [31] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: SSD: Single shot multibox detector. In: ECCV. pp. 21–37. Springer (2016)
  • [32] Liu, X., Liang, D., Yan, S., Chen, D., Qiao, Y., Yan, J.: FOTS: Fast oriented text spotting with a unified network. In: CVPR. pp. 5676–5685 (2018)
  • [33] Liu, Y., Chen, H., Shen, C., He, T., Jin, L., Wang, L.: ABCNet: Real-time scene text spotting with adaptive bezier-curve network. In: CVPR. pp. 9809–9818 (2020)
  • [34] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR. pp. 3431–3440 (2015)
  • [35] Long, S., Ruan, J., Zhang, W., He, X., Wu, W., Yao, C.: TextSnake: A flexible representation for detecting text of arbitrary shapes. In: ECCV. pp. 20–36 (2018)
  • [36] Lyu, P., Liao, M., Yao, C., Wu, W., Bai, X.: Mask TextSpotter: An end-to-end trainable neural network for spotting text with arbitrary shapes. In: ECCV. pp. 67–83 (2018)
  • [37] Nayef, N., Patel, Y., Busta, M., Chowdhury, P.N., Karatzas, D., Khlif, W., Matas, J., Pal, U., Burie, J.C., Liu, C.l., et al.: ICDAR2019 robust reading challenge on multi-lingual scene text detection and recognition – RRC-MLT-2019. In: ICDAR (2019)
  • [38] Nielsen, J.: Usability engineering. Morgan Kaufmann (1994)
  • [39] Qiao, L., Chen, Y., Cheng, Z., Xu, Y., Niu, Y., Pu, S., Wu, F.: Mango: A mask attention guided one-stage scene text spotter. In: AAAI (2021)
  • [40] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). pp. 234–241 (2015)
  • [41] Shi, B., Bai, X., Belongie, S.: Detecting oriented text in natural images by linking segments. In: CVPR. pp. 2550–2558 (2017)
  • [42] Shi, B., Bai, X., Yao, C.: An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. PAMI 39(11), 2298–2304 (2016)
  • [43] Shi, B., Wang, X., Lyu, P., Yao, C., Bai, X.: Robust scene text recognition with automatic rectification. In: CVPR. pp. 4168–4176 (2016)
  • [44] Tian, Z., Huang, W., He, T., He, P., Qiao, Y.: Detecting text in natural image with connectionist text proposal network. In: ECCV. pp. 56–72 (2016)
  • [45] Wang, C.Y., Mark Liao, H.Y., Wu, Y.H., Chen, P.Y., Hsieh, J.W., Yeh, I.H.: CSPNet: A new backbone that can enhance learning capability of CNN. In: CVPR Workshops. pp. 390–391 (2020)

  • [46] Wang, K., Babenko, B., Belongie, S.: End-to-end scene text recognition. In: ICCV (2011)
  • [47] Wang, R.J., Li, X., Ling, C.X.: Pelee: A real-time object detection system on mobile devices. NeurIPS 31, 1963–1972 (2018)
  • [48] Wang, T., Wu, D.J., Coates, A., Ng, A.Y.: End-to-end text recognition with convolutional neural networks. In: ICPR (2012)
  • [49] Wu, K., Otoo, E., Suzuki, K.: Optimizing two-pass connected-component labeling algorithms. Pattern Analysis and Applications 12(2), 117–135 (2009)
  • [50] Xing, L., Tian, Z., Huang, W., Scott, M.R.: Convolutional character networks. In: ICCV. pp. 9126–9136 (2019)
  • [51] Zhang, Z., Zhang, C., Shen, W., Yao, C., Liu, W., Bai, X.: Multi-oriented text detection with fully convolutional networks. In: CVPR. pp. 4159–4167 (2016)
  • [52] Zhou, X., Yao, C., Wen, H., Wang, Y., Zhou, S., He, W., Liang, J.: EAST: an efficient and accurate scene text detector. In: CVPR. pp. 5551–5560 (2017)
  • [53] Zhu, X., Jiang, Y., Yang, S., Wang, X., Li, W., Fu, P., Wang, H., Luo, Z.: Deep residual text detection network for scene text. In: ICDAR. vol. 1, pp. 807–812. IEEE (2017)
  • [54] Zhu, Y., Wang, S., Huang, Z., Chen, K.: Text recognition in images based on transformer with hierarchical attention. In: ICIP. pp. 1945–1949. IEEE (2019)