Patch Aggregator for Scene Text Script Identification

12/09/2019 ∙ by Changxu Cheng, et al. ∙ Huazhong University of Science & Technology ∙ NetEase, Inc.

Script identification in the wild is of great importance in a multi-lingual robust-reading system. Scripts deriving from the same language family share a large set of characters, which makes script identification a fine-grained classification problem. Most existing methods learn a single representation that combines local features by weighted averaging or other clustering methods, which may reduce the discriminatory power of some important parts of each script due to the interference of redundant features. In this paper, we present a novel module named Patch Aggregator (PA), which learns a more discriminative representation for script identification by taking into account the prediction scores of local patches. Specifically, we design a CNN-based method consisting of a standard CNN classifier and a PA module. Experiments demonstrate that the proposed PA module brings significant performance improvements over the baseline CNN model, achieving state-of-the-art results on three benchmark datasets for script identification: SIW-13, CVSI 2015 and RRC-MLT 2017.


I Introduction

Script identification aims to predict the script of a given text image and plays an increasingly important role in multilingual systems. Under many circumstances, it acts as a prerequisite for deciding which language model to use for further text detection or recognition.

Earlier works conducted on documents [28, 13, 4, 15], handwritten text [10, 12] and video overlaid text [7, 21, 32], where texts have regular layouts and simple backgrounds, have achieved great performance. But when it comes to scene text script identification, which extends the application to more fields such as scene understanding [5], additional challenges emerge, such as complex backgrounds, various text styles and diverse noise. Our work focuses on scene text and takes on the following challenges:

  • Some scripts have relatively subtle differences, e.g., Russian and English, which share a large set of characters. Distinguishing them is exactly a fine-grained classification problem requiring discriminative features.

  • Cropped text images have arbitrary aspect ratios, making it necessary to find an effective way to feed them into the model in the batch-based training phase.

Fig. 1: Examples of SIW-13 illustrate the importance of local discriminative parts. Characters in the red bounding box are discriminative, while those in the green one occur in several scripts. (a) The left text line only consisting of shared characters may be Chinese or Japanese, which is ambiguous. As for the right with just a kana character added, we are sure it is Japanese. (b) The left can be any type of Latin, but the right is definitely Russian due to the discriminative characters bounded by the red box.

The first challenge is crucial in script identification, where the bottleneck mainly comes from scripts of the same family sharing common characters. Hence, much attention is always paid to local discriminative features. Almost all previous works focus on collecting critical features without suppressing the redundant features that act as noise. Some works [27, 8, 30] adopted clustering on deep convolutional features to obtain critical descriptors, at the cost of multi-stage training and heavy computation due to the clustering. Inspired by the Siamese network [3], Gomez et al. [9] proposed an improved patch-based method containing an ensemble of identical nets to learn discriminative stroke-part representations. Mei et al. [16] adopted Convolutional Recurrent Neural Networks [24] to extract an image representation together with spatial dependencies, which is discriminative in spite of the shared characters. Fujii et al. [6] used an Encoder and a Summarizer to get local features and fused them into a single summary with an attention mechanism [1] to reflect the importance of different patches. Ankan et al. [2] proposed an attention-based Convolutional-LSTM network analyzing features globally and locally, which is popular in fine-grained classification [29, 33].

Fig. 2: Overview of our proposed method. Basic features are first extracted from the input. Then the Global Squeezer makes a general prediction, and meanwhile the Patch Aggregator mines patch-level prediction scores to construct a representation for a second prediction while dropping redundant features. Finally the two predictions are dynamically fused into the output. The model can be trained end-to-end in one stage.

Although great progress has been made, the works above, which take the features of all patches into account, suffer from a fatal issue: the domination of the discriminative features can be diluted by other weakly discriminative features. In particular, a text line belonging to a specific script may consist of many characters that lie in the intersection of several scripts, making a model prone to be misled by redundant features. As shown in Figure 1, the text line on the left, consisting only of shared characters, can be either Chinese or Japanese. However, the right one, with only one character added, is definitely Japanese, which shows the great power of the discriminative features. The existing works cannot make good use of such features. For example, if we average all the character patches with weights, the influence of the discriminative patches will be diluted by the much larger number of shared-character patches. A similar case appears in Figure 1, where the few Russian-specific characters on the right are critical.

The discriminative part is expected to be dominant even when it is small in quantity. We therefore propose the Patch Aggregator (PA) to learn and aggregate local features. PA makes patch-level predictions as an explicit representation, from which we can tell which scripts the patches of a given image could belong to. After that, by simply max-pooling the predicted probability distributions, the relation between the input image and every script becomes obvious. This is a low-dimensional but important discriminative feature representation, on which a simple linear classifier makes a local-level prediction for the whole image. For example, the right image in Figure 1 contains patches attached to two scripts, i.e., Chinese and Japanese, which form the low-dimensional discriminative features. PA will predict that the image is more likely to be Japanese if both Chinese- and Japanese-specific characters occur in it. But when no Japanese-specific character occurs, as in the left image in Figure 1, PA will infer it as Chinese. This process can be learned well in the training stage.

As for the problem of arbitrary aspect ratios, recent methods with good performance take densely cropped image patches of fixed size as input [8, 9, 30, 2]. They also employ some data augmentation, but they suffer from three issues. First, a cropped image patch may bring noise caused by abruptly cutting off characters, and the feature extractor cannot see the surroundings in other patches, which limits the feature representation by losing holistic context. Second, the heavy redundancy of overlapped patches leads to much repeated computation, pulling down efficiency at test time. Third, samples with larger aspect ratios in some scripts produce more cropped patches, which may cause data imbalance and disturb training to some degree. Hence, our input prefers full-size images to cropped patches. Shi et al. [25] designed a spatially-sensitive pooling layer that pools horizontally on the intermediate feature map so that the width of the input image can be flexible. We also adopt a pooling strategy to solve the problem, but our pooling process is intended to keep more useful information and be more interpretable.

In this work, we employ an end-to-end CNN-based method consisting of a standard CNN classifier called Global Squeezer (GS) and a PA module, as shown in Figure 2. In the training phase, we design a novel loss called the softermax loss to supervise the patch-level predictions in PA weakly with the ground-truth label, since the label of the whole text image sometimes cannot imply the exact classes of patches because some scripts share characters. All other predictions are supervised by the softmax loss. Succinctly, the main contributions of this paper are as follows:

  1. We propose PA to aggregate patch-level predictions to learn a discriminative representation, which has high interpretability. PA along with GS can process images with arbitrary aspect ratios in a simple but effective way.

  2. We design softermax loss to accomplish patch-level weak supervision on local predictions with image-level label.

  3. Experiments are conducted on three public datasets, i.e., SIW-13 [23], CVSI2015 [22] and RRC-MLT2017 [17], on which our method achieves state-of-the-art performance.

Fig. 3: Details of our method. The full-size image is first fed into the shared convolution module Conv0. Conv1 and Conv2 are specially designed for the two modules. The upper module, GS, uses Global Average Pooling (GAP) to squeeze the features per channel for a general prediction. Meanwhile, the lower module, PA, employs convolution and softmax to make patch-level predictions for intermediate supervision. Global Max Pooling (GMP) applied to the patch-level prediction scores of every class implies which potential scripts the input could belong to. Subsequently the fully connected layer (CLS2) makes a fine-grained prediction. GMP and CLS2 form the inference process that mines the discriminative features and considers the context. Finally we take an adaptive weighted sum of the two modules' outputs. The network is supervised by four losses in the training phase to ensure the expected output.

II Methodology

Our proposed method performs script identification simply and effectively. Convolution is applied directly to a full-size image instead of cropped patches. An overview of our method is shown in Figure 2 and the details are in Figure 3. A patch here refers to a single pixel of a specific deep feature map with a receptive field of proper size. A shared convolutional structure acts as the basic feature extractor in the framework, followed by two modules called Global Squeezer (GS) and Patch Aggregator (PA) respectively. GS aims to squeeze a holistic representation, while PA makes predictions over local features and aggregates them by inference, which makes full use of the discriminative features. Finally, we fuse the two predictions dynamically in a learnable way. The entire network can be trained end-to-end in one stage.

II-A Global Squeezer

Once we obtain the basic features from the shared convolutional structure, the Global Squeezer (GS), a common classifier, makes a global prediction. First, a tiny convolutional structure produces a feature map $F \in \mathbb{R}^{C \times H \times W}$ that meets the demands of global squeezing in terms of receptive field and dimension. Subsequently we squeeze $F$ across the spatial dimensions by Global Average Pooling (GAP) to obtain a channel-wise global descriptor $z \in \mathbb{R}^{C}$, where $C$ is the number of channels. This can be described as Eq. 1:

$z_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} F_{c,i,j}, \quad c = 1, \dots, C$   (1)

GAP squeezes the holistic feature representation channel by channel, since a convolutional feature channel often corresponds to a certain type of visual pattern [31]. The holistic representation $z$ is then fed into a linear classifier to obtain the global prediction scores $\hat{y}_g$ over the $N$ classes.
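To make the GS pipeline concrete, the following PyTorch sketch implements GAP followed by a linear classifier; the channel count, class count and module name are placeholders of ours, not the authors' released code.

```python
import torch
import torch.nn as nn

class GlobalSqueezer(nn.Module):
    """Minimal sketch of GS: GAP over the spatial dimensions, then a linear classifier."""
    def __init__(self, in_channels: int = 512, num_classes: int = 13):
        super().__init__()
        self.fc = nn.Linear(in_channels, num_classes)  # CLS1 in Fig. 3

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map from the GS-specific convolutional branch.
        z = feat.mean(dim=(2, 3))   # Eq. (1): GAP squeezes each channel to one value -> (B, C)
        return self.fc(z)           # global prediction scores over the N classes -> (B, N)
```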

II-B Patch Aggregator

It is far from sufficient to learn discriminative features by making a prediction from a single global perspective only, as GS does. The attention mechanism [1] has been widely applied to accomplish discriminative learning, but it seems less valid for script identification because of the effect caused by redundant features. Here we specify the novel Patch Aggregator (PA), which learns and exploits the discriminative features better.

PA starts with the same kind of tiny convolutional structure as in GS so that the pixels in the deep feature map have proper receptive fields, which leads to precise patch-level scores implemented by convolution. A softmax function then converts the scores into probability distributions over the $N$ classes at every location. This step is placed under a special intermediate supervision, discussed in II-D1, during training. The patch-level scores actually act as high-level semantic features from which the discriminative representation can be extracted.

Taking into account the impairment caused by redundant features, we adopt Global Max Pooling (GMP) when aggregating the prediction scores of patches, picking the most remarkable response per class, which is highly interpretable. The process can be described as Eq. 2:

$v_n = \max_{i,j} \; s_{n,i,j}, \quad n = 1, \dots, N$   (2)

where $s_{n,i,j}$ is the predicted score (after softmax) of the patch at position $(i,j)$ for the $n$-th class. After picking out the maximum per class, $v = (v_1, \dots, v_N)$ reflects the likelihood of the given image belonging to every class, so we know which scripts the components of the input image could belong to. A two-layer linear classifier then produces the scores $\hat{y}_l$ over the $N$ classes from the local perspective.
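The following PyTorch sketch illustrates PA as described above. The 1x1 score convolution, the choice of pooling the softmax probabilities (rather than the raw scores), and the hidden size of the two-layer classifier are our assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAggregator(nn.Module):
    """Minimal sketch of PA: patch-level scores, softmax, GMP, then a two-layer classifier."""
    def __init__(self, in_channels: int = 512, num_classes: int = 13, hidden: int = 32):
        super().__init__()
        self.score_conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)  # per-patch class scores
        self.cls = nn.Sequential(                                             # CLS2 in Fig. 3
            nn.Linear(num_classes, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, feat: torch.Tensor):
        s = self.score_conv(feat)                   # (B, N, H, W) patch-level scores
        p = F.softmax(s, dim=1)                     # per-patch probability distributions
        v = p.flatten(2).max(dim=2).values          # Eq. (2): GMP keeps the strongest response per class
        y_l = self.cls(v)                           # local prediction from the aggregated evidence
        return y_l, s                               # raw scores s are kept for the intermediate supervision
```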

Visualization of the behaviour in the module is available in III-D.

II-C Fusion

To combine the outputs of the above two modules adaptively, we adopt a dynamic weighted fusion. The weight $w$ of the global output $\hat{y}_g$ is predicted from $\hat{y}_g$ itself, and the weight of the local output $\hat{y}_l$ is its complement $1 - w$. The fusion process is shown in Eq. 3 and Eq. 4, where Eq. 3 shows the mapping process, $\sigma$ is the sigmoid function, and $W_w$ and $b_w$ are the trainable parameters of the fusion linear layer.

$w = \sigma(W_w \hat{y}_g + b_w)$   (3)
$\hat{y} = w \, \hat{y}_g + (1 - w) \, \hat{y}_l$   (4)
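A short sketch of the fusion step under the reconstruction above; taking the global scores (rather than the pooled descriptor) as input to the gate is an assumption here.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Minimal sketch of the dynamic weighted fusion (Eq. 3 and Eq. 4)."""
    def __init__(self, num_classes: int = 13):
        super().__init__()
        self.gate = nn.Linear(num_classes, 1)   # the "Linear:1 + Sigmoid" row of Table II

    def forward(self, y_g: torch.Tensor, y_l: torch.Tensor) -> torch.Tensor:
        w = torch.sigmoid(self.gate(y_g))       # Eq. (3): weight of the global output
        return w * y_g + (1.0 - w) * y_l        # Eq. (4): complementary weighted sum
```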

II-D Loss Functions

In the training stage, the proposed network is optimized by four losses, $\mathcal{L}_g$, $\mathcal{L}_l$, $\mathcal{L}_f$ and $\mathcal{L}_p$, as shown in Figure 3, to make sure the network behaves as expected.

GS and PA are supervised by $\mathcal{L}_g$ and $\mathcal{L}_l$ respectively to make sure each branch really learns well. $\mathcal{L}_f$ is devised for the final decisive output, which determines the performance of the model, and therefore holds a relatively higher weight. These three losses all use the softmax loss based on the ground-truth label.

The loss $\mathcal{L}_p$ is designed for the intermediate supervision mentioned in Section II-B. Since the categories of some patches cannot simply be inferred from the image-level label because of the character-sharing issue, the image-level label is not sufficient to supervise the patch-level scores if we directly use the softmax loss. Thus we propose the novel softermax loss to deal with this problem.

II-D1 Softermax Loss

The classical softmax loss pushes the model to output a much greater probability on the ground-truth (GT) class than on the others. It makes the model excessively confident in the GT class, which is inappropriate for patch-level prediction because some characters are confusable across scripts. To relieve this extreme and fully learn discriminative features at the patch level, we make the loss softer for a single patch, which can be formulated as Eq. 5:

$\mathcal{L}_{softer} = -\log \sum_{i \in \mathcal{T}_k} \frac{e^{s_i}}{\sum_{j=1}^{N} e^{s_j}}$   (5)

where $s_i$ is the score of the $i$-th category at a specific location obtained by the convolution, and $\mathcal{T}_k$ indexes the top-$k$ elements of $\{s_i\}$ ($k$ is a hyperparameter). $\mathcal{L}_{softer}$ prompts the top-$k$ probabilities to be as great as possible, alleviating the extreme behaviour of the softmax loss to some extent. However, adopting the softermax loss alone amounts to unsupervised learning, leaving the model prone to falling into a local optimum.

Hence we couple the softmax and softermax losses to get a trade-off. The patch-level loss for an image is averaged over its patches, as shown in Eq. 6, where $\alpha$ determines how soft the combined loss is, $\mathcal{L}_{soft}$ is the softmax loss supervised simply by the label of the input image, and $\mathcal{P}$ denotes the set of patches of the image:

$\mathcal{L}_p = \frac{1}{|\mathcal{P}|} \sum_{p \in \mathcal{P}} \left[ (1 - \alpha)\,\mathcal{L}_{soft}^{(p)} + \alpha\,\mathcal{L}_{softer}^{(p)} \right]$   (6)
Original Aspect Ratio Range: (0, 3)   [3, 6)   [6, 12]   (12, +∞)
New Aspect Ratio:            2        4        8         16
TABLE I: Grouping resizing with height = 32 for SIW-13.

During training, the above losses contribute to the total loss with weights $\lambda_1, \lambda_2, \lambda_3, \lambda_4$, as shown in Eq. 7:

$\mathcal{L} = \lambda_1 \mathcal{L}_g + \lambda_2 \mathcal{L}_l + \lambda_3 \mathcal{L}_f + \lambda_4 \mathcal{L}_p$   (7)
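The sketch below shows one plausible reading of the loss design: a top-k softermax term (Eq. 5) mixed with a per-patch cross-entropy term (Eq. 6), plus the weighted total loss (Eq. 7). The exact form of the softermax term and the default values of k and alpha are our assumptions.

```python
import torch
import torch.nn.functional as F

def softermax_loss(patch_scores: torch.Tensor, k: int = 3) -> torch.Tensor:
    # patch_scores: (B, N, H, W) raw class scores for every patch.
    B, N, H, W = patch_scores.shape
    s = patch_scores.permute(0, 2, 3, 1).reshape(-1, N)   # one row of scores per patch
    p = F.softmax(s, dim=1)
    topk_mass = p.topk(k, dim=1).values.sum(dim=1)        # probability mass of the k most likely scripts
    return -topk_mass.clamp_min(1e-8).log().mean()        # Eq. (5): push the top-k mass up

def patch_loss(patch_scores: torch.Tensor, labels: torch.Tensor,
               alpha: float = 0.5, k: int = 3) -> torch.Tensor:
    # Eq. (6): mix per-patch cross-entropy against the image-level label with the softermax term.
    B, N, H, W = patch_scores.shape
    patch_labels = labels.view(B, 1, 1).expand(B, H, W)
    ce = F.cross_entropy(patch_scores, patch_labels)      # averaged over all patches
    return (1.0 - alpha) * ce + alpha * softermax_loss(patch_scores, k)

def total_loss(l_g, l_l, l_f, l_p, weights=(0.1, 0.1, 1.0, 0.1)):
    # Eq. (7): weighted sum of the four losses; the fused output carries the highest weight.
    return weights[0] * l_g + weights[1] * l_l + weights[2] * l_f + weights[3] * l_p
```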

III Experiment

We conduct experiments on three public datasets for script identification. SIW-13 [23] is officially split into 9,791 training and 6,500 test images covering 13 scripts. CVSI2015 [22] was released for the ICDAR 2015 Competition on Video Script Identification and contains text line images of 10 Indian scripts. RRC-MLT2017 [17] was released for the ICDAR 2017 Competition on MLT Task 2 and comprises 68,613 training, 16,255 validation and 97,619 test cropped images. This dataset has an extremely imbalanced distribution among its 7 scripts and is especially tilted toward Latin. It also contains some multi-oriented and curved texts, which makes it more challenging.

III-A Implementation Details

Because of the diverse aspect ratios of the dataset images, we group every image by its aspect ratio and resize it to a fixed size determined by the group it belongs to, with the short side of all images set to 32, so that we can train with batches efficiently. The number of groups is determined by the dataset. Table I shows the grouping resizing for SIW-13; for example, an image with an aspect ratio of 3.5 is resized to 32x128, where 32 is the fixed height. The same trick is used on CVSI2015 and RRC-MLT2017.
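A small sketch of this grouping resize for the SIW-13 buckets in Table I; the interpolation mode is an assumption.

```python
from PIL import Image

def grouped_resize(img: Image.Image, height: int = 32) -> Image.Image:
    """Resize an image to height 32 and a bucketed aspect ratio (Table I, SIW-13)."""
    ratio = img.width / img.height
    if ratio < 3:
        new_ratio = 2
    elif ratio < 6:
        new_ratio = 4
    elif ratio <= 12:
        new_ratio = 8
    else:
        new_ratio = 16
    # e.g. an image with aspect ratio 3.5 falls into [3, 6) and becomes 32 x 128.
    return img.resize((height * new_ratio, height), Image.BILINEAR)
```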

We also exploit some data augmentation, such as changing contrast, adding random noise, slight cropping and perspective transforms, to make full use of the training data. Image data is normalized to a uniform range.
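One possible torchvision realization of these augmentations is sketched below; the magnitudes, the crop size and the normalization range are placeholders rather than values from the paper.

```python
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.ColorJitter(contrast=0.3),                        # change contrast
    transforms.RandomPerspective(distortion_scale=0.2, p=0.5),   # perspective transform
    transforms.RandomResizedCrop((32, 128), scale=(0.9, 1.0)),   # slight cropping (size per Table I bucket)
    transforms.ToTensor(),
    transforms.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0, 1)),  # random noise
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),            # normalize to a uniform range
])
```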

Our basic architecture uses VGG-style stacking [26], i.e., 3x3 convolutions with padding 1, each followed by Batch Normalization [14] and ReLU. More details are shown in Table II for SIW-13 and CVSI2015, where Modules 1-6 form the shared convolutional part and the GS branch is shown on the left while the PA branch is on the right. Note that "1-6" means the first six modules have the same structure but different numbers of filters and parameters. We use Kaiming initialization [11] for the weights. As for RRC-MLT2017, we take the convolutional part of VGG16 [26] pre-trained on ImageNet as the backbone because of its much more complex images. The design guarantees a sufficient receptive field for the patch-level prediction scores.

Module | GS branch (left) | PA branch (right)
1-6 (shared) | Conv, BatchNorm, ReLU (three kinds of convolutional kernels for Modules 1-2, 3-4, 5-6 respectively) | shared with GS
Pooling (shared) | MaxPooling, placed after Modules 2, 4 and 6 | shared with GS
7 | Conv, BatchNorm, ReLU | Conv, BatchNorm, ReLU
8 | Conv, BatchNorm, ReLU | Conv, BatchNorm, ReLU
9 | Linear:512, ReLU, Dropout(0.3) | Conv (stride 1, padding 0), BatchNorm, ReLU
10 | Linear | Conv (stride 1, padding 0)
11 | - | Linear:32, ReLU
12 | - | Linear
Fusion | Linear:1, Sigmoid | -
TABLE II: Architecture of our framework for SIW-13 and CVSI2015.
Method SIW-13 CVSI2015 RRC-MLT2017
Shi [25] 88.0 96.69 -
Shi [23] 89.4 94.30 -
Gomez [9] 94.8 97.20 -
Nicolaou [18] 83.7 98.18 -
Mei [16] 92.75 94.20 -
Bhunia [2] 96.5 97.75 -
Zdenek [30] 92.88 97.11 -
Patel [20] - - 88.54
ours 97.3 98.60 89.42
TABLE III: Results of our method on SIW-13, CVSI2015, RRC-MLT2017, as well as some other methods to be compared with.
Script Zdenek [30] Mei [16] Gomez [9] Bhunia [2] ours
Avg 92.88 92.75 94.8 96.5 97.3
Ara 97.0 96.2 98.0 99.0 98.6
Cam 96.8 93.4 99.2 99.0 98.6
Chi 91.3 94.0 88.4 92.0 95.6
Eng 80.5 83.6 97.0 98.0 94.0
Gre 84.6 89.4 99.8 100.0 96.4
Heb 94.6 93.8 96.2 99.0 96.8
Jap 93.4 91.8 92.6 98.0 95.2
Kan 94.7 91.8 88.6 92.0 98.0
Kor 97.5 95.6 89.4 93.0 99.6
Mon 97.7 97.0 94.6 98.0 98.8
Rus 82.1 87.0 95.0 93.0 94.8
Tha 97.2 93.6 94.8 95.0 98.4
Tib 99.2 98.6 98.2 97.0 99.8
Size - - 24 12 26.7
Speed 60 92 13 85 2.5
TABLE IV: Accuracies for all script types, Model size (MB) and test speed (ms per img) on SIW-13

In the experiments we use PyTorch [19] for deep learning acceleration. During training, the hyperparameters for Eq. 5, Eq. 6 and Eq. 7, including the loss weights $[\lambda_1, \lambda_2, \lambda_3, \lambda_4] = [0.1, 0.1, 1.0, 0.1]$, are chosen to give the best accuracy. The batch size is 16. Stochastic gradient descent (SGD) is used for optimization, with momentum and weight decay set to 0.9 and 1e-4 respectively. The learning rate starts at 0.1 and decays by a factor of 0.3 whenever the training loss stops falling for a while. Every time it drops below 8e-5, we reset it to 0.01 and continue training until the default number of epochs (500 for SIW-13 and CVSI2015, 100 for RRC-MLT2017) is reached. We conduct our experiments on an Nvidia GeForce GTX GPU with 10.9 GB memory, one Intel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz and 64 GB RAM. The training time is around 5 hours.

Script | GS   | GS+GS | GS+GMP | PA   | GS+PA
Avg    | 96.2 | 96.3  | 96.5   | 94.5 | 97.3
Ara    | 98.2 | 98.4  | 97.8   | 96.8 | 98.6
Cam    | 95.6 | 96.8  | 97.2   | 94.8 | 98.6
Chi    | 94.8 | 95.0  | 94.6   | 94.4 | 95.6
Eng    | 91.6 | 90.4  | 92.4   | 89.8 | 94.0
Gre    | 95.0 | 96.6  | 96.4   | 92.2 | 96.4
Heb    | 96.4 | 96.4  | 97.2   | 95.2 | 96.8
Jap    | 95.6 | 95.6  | 95.6   | 92.4 | 95.2
Kan    | 96.2 | 97.4  | 98.0   | 96.4 | 98.0
Kor    | 98.6 | 98.0  | 98.4   | 96.8 | 99.6
Mon    | 98.6 | 98.0  | 98.4   | 98.2 | 98.8
Rus    | 92.8 | 92.2  | 90.8   | 87.0 | 94.8
Tha    | 98.0 | 98.0  | 98.2   | 95.8 | 98.4
Tib    | 99.6 | 99.4  | 99.8   | 99.2 | 99.8
TABLE V: Contribution of the proposed branches on SIW-13.
without $\mathcal{L}_p$ | with $\mathcal{L}_p$ ($\alpha = 0$) | with $\mathcal{L}_p$ (softermax)
96.3 | 96.8 | 97.3
TABLE VI: Effects of the intermediate supervision and the softermax loss in PA on SIW-13.

III-B Results

The results on SIW-13, CVSI2015 and RRC-MLT2017 are displayed in Table III and Table IV.

Fig. 4: Visualization of the behaviour of PA in our experiments. Two difficult test samples are taken as examples. Note that neighboring patches actually overlap. (a) Some patches are likely to be Chinese, while some are Japanese. By performing GMP we can regard the two scripts as abstract high-level semantic features and make an inference for the further local prediction, so we can assert that the image is Japanese instead of Chinese. (b) The same logical flow as in (a): English and Russian are potentially the two high-level semantic features, and PA can accurately identify the image as Russian.

For the scene text line images of SIW-13, a great improvement has been made with a good balance among all scripts. This sequence-to-label problem demands a comprehensive feature representation more than sequential dependency, which is supported by the comparison between ours and CRNN [16], a popular architecture in scene text recognition [24]. Besides, Mei [16] takes much time to predict a text line, which may be caused by the sequential computation in the RNN. Zdenek [30] used bags of local convolutional triplets to enhance the discriminative features, but they imposed inverse document frequency weighting on codeword occurrences, suffering a lot from the impairment caused by less critical features; the image in Figure 4, which has many more Chinese patches than kana, was misclassified as Chinese by their model. Bhunia [2] coupled local and global features, but it suffers from the impairment too. Moreover, the use of many cropped patches incurs considerable redundant computation and memory usage, hurting efficiency, especially in its LSTM module, which precludes parallelization. Our model, with 26.7 MB of parameters, takes about 2.5 ms per image when testing images one by one, owing to the efficient matrix computation on a full-size image and the simple pipeline.

For CVSI2015, whose images come from video captions and have simple backgrounds, our method reaches the best accuracy among the published works. Former works, such as Shi [25] and Nicolaou [18], usually cannot strike a proper balance between scene text and video captions.

Our method also achieves the best performance on RRC-MLT2017, pushing it toward more complex and more practical scenarios. Besides the results shown in Table III, Bhunia [2] evaluated their model on the validation set and achieved 90.23%, while our approach reaches 95.31% on it.

III-C Ablation Study

We conduct ablation studies on SIW-13 to show the power of our proposed PA along with GS and softermax loss.

III-C1 The contribution of the proposed module

Here we consider the contribution of PA by replacing the two-module (GS and PA) parts with other modules alternately while keeping the shared feature extractor.

Table V shows the results in detail, where GS means a single GS module is used without PA, and PA has the corresponding meaning. GS+GS is an ensemble model in which another GS takes the place of PA in Figure 3. GS+GMP changes the GAP operation into GMP in one of the modules of GS+GS. GS+PA is the exact proposed method.

A single module is not enough for a fine-grained classifier to exploit information both globally and locally, which is reflected by the results of GS and PA alone: GS cannot notice the fine-grained details well, and PA is prone to being limited to a sub-area. The ensemble model GS+GS only obtains a slight improvement over GS, showing that the good performance of our proposed method should not be attributed to ensembling. GS+GMP uses GMP to extract the most remarkable responses in 512 dimensions, which can be regarded as a kind of local feature obtained in another way, but each dimension has no explicit meaning and cannot be supervised by the label; thus it only improves accuracy by 0.3% while holding many more parameters. All of these comparisons highlight the power of integrating GS and PA.

III-C2 Effect of softermax loss

The proposed softermax loss mentioned in II-D1 is vital for PA in the training stage. We have investigated whether the supervision works and the importance of softermax loss.

Ablation results are shown in Table VI. The intermediate supervision has a clear effect on the final accuracy, guiding the mid-level prediction toward our expectation with an explicit meaning. The weight $\alpha$ in Eq. 6 determines the influence of the softermax loss; "$\alpha = 0$" in Table VI means that only the softmax loss conducts the supervision. The result shows the significance of the softness brought by the softermax loss.

III-D Visualization Analysis

Insights into the behaviour of our proposed PA can be obtained by visualizing the vectors of $N$ dimensions, which are probability distributions over the classes. Specifically, we take the patch-level predictions, the vector after GMP, and the local prediction from the linear classifier (fc) as the objects to observe.

As shown in Figure 4, the predictions for patches are not forced to an extreme, and the probabilities spread relatively high over several (here, three) scripts, which agrees with the fact that a patch alone, regarded as an independent sub-sample of the input, can actually correspond to several scripts. After GMP, the vector consisting of the most remarkable response per class is a kind of high-level semantic feature that shows which scripts the components of the input could belong to. The local-level prediction is then obtained by further inference, which is simply a linear classifier. The proposed procedure makes full use of the fine-grained discriminative features.

IV Conclusion

We present a simple but effective approach to scene text script identification. The Patch Aggregator learns discriminative features without having their discriminatory power reduced by redundant features, and it significantly improves the baseline model, the Global Squeezer. The novel softermax loss provides intermediate supervision on the patch-level predictions. Our method achieves the best results on three benchmark datasets, demonstrating its effectiveness.

Acknowledgment

This research was supported by the National Natural Science Foundation of China (NSFC) grants 61733007 and 61773176. Dr. Xiang Bai was supported by the National Program for Support of Top-notch Young Professionals and the Program for HUST Academic Frontier Youth Team.

References

  • [1] D. Bahdanau, K. Cho, and Y. Bengio (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. Cited by: §I, §II-B.
  • [2] A. K. Bhunia, A. Konwer, A. K. Bhunia, A. Bhowmick, P. P. Roy, and U. Pal (2019) Script identification in natural scene image and video frames using an attention based convolutional-lstm network. Pattern Recognition 85, pp. 172–184. Cited by: §I, §I, §III-B, §III-B, TABLE III, TABLE IV.
  • [3] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah (1994) Signature verification using a "siamese" time delay neural network. In Advances in neural information processing systems, pp. 737–744. Cited by: §I.
  • [4] A. Busch, W. W. Boles, and S. Sridharan (2005) Texture for script identification. IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (11), pp. 1720–1732. Cited by: §I.
  • [5] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele (2016) The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3213–3223. Cited by: §I.
  • [6] Y. Fujii, K. Driesen, J. Baccash, A. Hurst, and A. C. Popat (2017) Sequence-to-label script identification for multilingual ocr. In Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, Vol. 1, pp. 161–168. Cited by: §I.
  • [7] J. Gllavata and B. Freisleben (2005) Script recognition in images with complex backgrounds. In Signal Processing and Information Technology, 2005. Proceedings of the Fifth IEEE International Symposium on, pp. 589–594. Cited by: §I.
  • [8] L. Gomez and D. Karatzas (2016) A fine-grained approach to scene text script identification. In Document Analysis Systems (DAS), 2016 12th IAPR Workshop on, pp. 192–197. Cited by: §I, §I.
  • [9] L. Gomez, A. Nicolaou, and D. Karatzas (2017) Improving patch-based scene text script identification with ensembles of conjoined networks. Pattern Recognition 67, pp. 85–96. Cited by: §I, §I, TABLE III, TABLE IV.
  • [10] M. Hangarge and B. Dhandra (2010) Offline handwritten script identification in document images. Int. J. Comput. Appl 4 (6), pp. 6–10. Cited by: §I.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun (2015) Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034. Cited by: §III-A.
  • [12] J. Hochberg, K. Bowers, M. Cannon, and P. Kelly (1999) Script and language identification for handwritten document images. International Journal on Document Analysis and Recognition 2 (2-3), pp. 45–52. Cited by: §I.
  • [13] J. Hochberg, P. Kelly, T. Thomas, and L. Kerns (1997) Automatic script identification from document images using cluster-based templates. IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (2), pp. 176–181. Cited by: §I.
  • [14] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §III-A.
  • [15] G. D. Joshi, S. Garg, and J. Sivaswamy (2007) A generalised framework for script identification. International Journal of Document Analysis and Recognition (IJDAR) 10 (2), pp. 55–68. Cited by: §I.
  • [16] J. Mei, L. Dai, B. Shi, and X. Bai (2016) Scene text script identification with convolutional recurrent neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 4053–4058. Cited by: §I, §III-B, TABLE III, TABLE IV.
  • [17] N. Nayef, F. Yin, I. Bizid, H. Choi, Y. Feng, D. Karatzas, Z. Luo, U. Pal, C. Rigaud, J. Chazalon, et al. (2017) ICDAR2017 robust reading challenge on multi-lingual scene text detection and script identification-rrc-mlt. In Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, Vol. 1, pp. 1454–1459. Cited by: item 3, §III.
  • [18] A. Nicolaou, A. D. Bagdanov, L. Gómez, and D. Karatzas (2016) Visual script and language identification. In Document Analysis Systems (DAS), 2016 12th IAPR Workshop on, pp. 393–398. Cited by: §III-B, TABLE III.
  • [19] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §III-A.
  • [20] Y. Patel, M. Bušta, and J. Matas (2018) E2E-mlt-an unconstrained end-to-end method for multi-language scene text. arXiv preprint arXiv:1801.09919. Cited by: TABLE III.
  • [21] T. Q. Phan, P. Shivakumara, Z. Ding, S. Lu, and C. L. Tan (2011) Video script identification based on text lines. In Document Analysis and Recognition (ICDAR), 2011 International Conference on, pp. 1240–1244. Cited by: §I.
  • [22] N. Sharma, R. Mandal, R. Sharma, U. Pal, and M. Blumenstein (2015) ICDAR2015 competition on video script identification (cvsi 2015). In Document Analysis and Recognition (ICDAR), 2015 13th International Conference on, pp. 1196–1200. Cited by: item 3, §III.
  • [23] B. Shi, X. Bai, and C. Yao (2016) Script identification in the wild via discriminative convolutional neural network. Pattern Recognition 52, pp. 448–458. Cited by: item 3, TABLE III, §III.
  • [24] B. Shi, X. Bai, and C. Yao (2017) An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE transactions on pattern analysis and machine intelligence 39 (11), pp. 2298–2304. Cited by: §I, §III-B.
  • [25] B. Shi, C. Yao, C. Zhang, X. Guo, F. Huang, and X. Bai (2015) Automatic script identification in the wild. In Document Analysis and Recognition (ICDAR), 2015 13th International Conference on, pp. 531–535. Cited by: §I, §III-B, TABLE III.
  • [26] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §III-A.
  • [27] S. Singh, A. Gupta, and A. A. Efros (2012) Unsupervised discovery of mid-level discriminative patches. In Computer Vision–ECCV 2012, pp. 73–86. Cited by: §I.
  • [28] T. Tan (1998) Rotation invariant texture features and their use in automatic script identification. IEEE Transactions on pattern analysis and machine intelligence 20 (7), pp. 751–756. Cited by: §I.
  • [29] Y. Wang, V. I. Morariu, and L. S. Davis (2018) Learning a discriminative filter bank within a cnn for fine-grained recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4148–4157. Cited by: §I.
  • [30] J. Zdenek and H. Nakayama (2017) Bag of local convolutional triplets for script identification in scene text. In Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on, Vol. 1, pp. 369–375. Cited by: §I, §I, §III-B, TABLE III, TABLE IV.
  • [31] X. Zhang, H. Xiong, W. Zhou, W. Lin, and Q. Tian (2016) Picking deep filter responses for fine-grained image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1134–1142. Cited by: §II-A.
  • [32] D. Zhao, P. Shivakumara, S. Lu, and C. L. Tan (2012) New spatial-gradient-features for video script identification. In Document Analysis Systems (DAS), 2012 10th IAPR International Workshop on, pp. 38–42. Cited by: §I.
  • [33] H. Zheng, J. Fu, T. Mei, and J. Luo (2017) Learning multi-attention convolutional neural network for fine-grained image recognition. In Int. Conf. on Computer Vision, Vol. 6. Cited by: §I.