1 Introduction
Many applications of machine learning, and most recently computer vision, have been disrupted by the use of convolutional neural networks (CNNs). The combination of an end-to-end learning system, with minimal need for human design decisions, and the ability to efficiently train large and complex models has allowed them to achieve state-of-the-art performance in a number of benchmarks [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton, Taigman et al.(2014)Taigman, Yang, Ranzato, and Wolf, Toshev and Szegedy(2013), Alsharif and Pineau(2014), Goodfellow et al.(2013a)Goodfellow, Bulatov, Ibarz, Arnoud, and Shet, Sermanet et al.(2013)Sermanet, Eigen, Zhang, Mathieu, Fergus, and LeCun, Razavian et al.(2014)Razavian, Azizpour, Sullivan, and Carlsson]. However, these high-performing CNNs come with a large computational cost due to the use of chains of several convolutional layers, often requiring implementations on GPUs [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton, Jia(2013)] or highly optimized distributed CPU architectures [Vanhoucke et al.(2011)Vanhoucke, Senior, and Mao] to process large datasets. The increasing use of these networks for detection in sliding-window approaches [Sermanet et al.(2013)Sermanet, Eigen, Zhang, Mathieu, Fergus, and LeCun, Farabet et al.(2012)Farabet, Couprie, Najman, and LeCun, Oquab et al.(2014)Oquab, Bottou, Laptev, and Sivic] and the desire to apply CNNs in real-world systems mean that the speed of inference becomes an important factor for applications. In this paper we introduce an easy-to-implement method for significantly speeding up pretrained CNNs that requires minimal modifications to existing frameworks. There can be a small associated loss in performance, but this is tunable to a desired accuracy level. For example, we show that a 4.5× speedup can still give state-of-the-art performance in our example application of character recognition.
While a few other CNN acceleration methods exist, our key insight is to exploit the redundancy that exists between different feature channels and filters [Denil et al.(2013)Denil, Shakibi, Dinh, and de Freitas]. We contribute two approximation schemes to do so (Sect. 2) and two optimization methods for each scheme (Sect. 2.2). Both schemes are orthogonal to other architecture-specific optimizations and can be easily applied to existing CPU and GPU software. Performance is evaluated empirically in Sect. 3 and results are summarized in Sect. 4.
Related work.
There are only a few general speedup methods for CNNs. Denton et al. [Denton et al.(2014)Denton, Zaremba, Bruna, LeCun, and Fergus] use low-rank approximations and clustering of filters, achieving a 1.6× speedup of single convolutional layers (not of the whole network) with a 1% drop in classification accuracy. Mamalet et al. [Mamalet and Garcia(2012)] design the network to use rank-1 filters from the outset and combine them with an average pooling layer; however, the technique cannot be applied to general network designs. Vanhoucke et al. [Vanhoucke et al.(2011)Vanhoucke, Senior, and Mao] show that 8-bit quantization of the layer weights can result in a speedup with minimal loss of accuracy. Not specific to CNNs, Rigamonti et al. [Rigamonti et al.(2013)Rigamonti, Sironi, Lepetit, and Fua] show that multiple image filters can be approximated by a shared set of separable (rank-1) filters, allowing large speedups with minimal loss in accuracy.
Moving to hardware-specific optimizations, cuda-convnet [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton] and Caffe [Jia(2013)] show that highly optimized CPU and GPU code can give superior computational performance in CNNs. [Mathieu et al.(2013)Mathieu, Henaff, and LeCun] performs convolutions in the Fourier domain through FFTs computed efficiently over batches of images on a GPU. Other methods from [Vanhoucke et al.(2011)Vanhoucke, Senior, and Mao] show that specific CPU architectures can be taken advantage of, e.g. by using SSSE3 and SSE4 fixed-point instructions and appropriate alignment of data in memory. Farabet et al. [Farabet et al.(2011)Farabet, LeCun, Kavukcuoglu, Culurciello, Martini, Akselrod, and Talay] show that using bespoke FPGA implementations of CNNs can greatly increase processing speed.
To speed up test-time evaluation in a sliding-window context for a CNN, [Iandola et al.(2014)Iandola, Moskewicz, Karayev, Girshick, Darrell, and Keutzer] shows that multi-scale features can be computed efficiently by simply convolving the CNN across a flattened multi-scale pyramid. Furthermore, search-space reduction techniques such as selective search [van de Sande et al.(2011)van de Sande, Uijlings, Gevers, and Smeulders] drastically cut down the number of times a full forward pass of the CNN must be computed, by cheaply identifying a small number of candidate object locations in the image.
Note that the methods we propose are not specific to any processing architecture and can be combined with many of the other speedup methods given above.
2 Filter Approximations
Filter banks are used widely in computer vision as a method of feature extraction and, when used in a convolutional manner, generate feature maps from input images. For an input $z \in \mathbb{R}^{H \times W}$, the set of $N$ output feature maps $y_i \in \mathbb{R}^{H' \times W'}$ is generated by convolving $z$ with $N$ filters $f_i$ such that $y_i = f_i * z$. The collection of $N$ filters can be learnt, for example, through dictionary learning methods [Kavukcuoglu et al.(2010)Kavukcuoglu, Sermanet, Boureau, Gregor, Mathieu, and LeCun, Lee et al.(2009)Lee, Grosse, Ranganath, and Ng, Rigamonti et al.(2011)Rigamonti, Brown, and Lepetit] or CNNs, and the filters are generally full rank and expensive to convolve with large images. Using a direct implementation of convolution, the complexity of convolving a single-channel input image with a bank of $N$ 2D filters of size $d \times d$ is $O(d^2 N H' W')$. We next introduce our method for accelerating this computation; it takes advantage of the fact that there exists significant redundancy between different filters and feature channels.

One way to exploit this redundancy is to approximate the filter set by a linear combination of a smaller basis set of $M$ filters [Rigamonti et al.(2013)Rigamonti, Sironi, Lepetit, and Fua, Song et al.(2012)Song, Zickler, Althoff, Girshick, Fritz, Geyer, Felzenszwalb, and Darrell, Song et al.(2013)Song, Darrell, and Girshick]. The basis filter set $\{s_k\}_{k=1}^{M}$ is used to generate $M$ basis feature maps which are then linearly combined such that $y_i \approx \sum_{k=1}^{M} a_{ik} (s_k * z)$. This can lead to a speedup in feature map computation as a smaller number of filters need be convolved with the input image, and the final feature maps are composed of a cheap linear combination of these. The complexity in this case is $O(M d^2 H' W' + M N H' W')$, so a speedup can be achieved if $M (d^2 + N) < N d^2$.
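The linear-combination idea can be illustrated with a small numpy sketch. All sizes and variable names here are illustrative; the filter bank is deliberately constructed to lie exactly in the span of the basis, so the cheap computation reproduces the direct one exactly.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
d, N, M = 5, 64, 16                      # filter size, original filters, basis size

# Hypothetical filter bank built from a random M-filter basis, for illustration.
basis = rng.standard_normal((M, d, d))   # basis filters s_k
coeff = rng.standard_normal((N, M))      # mixing weights a_ik
filters = np.einsum('nm,mij->nij', coeff, basis)   # f_i = sum_k a_ik s_k

z = rng.standard_normal((32, 32))        # single-channel input image

# Direct: N full 2D convolutions.
direct = np.stack([convolve2d(z, f, mode='valid') for f in filters])

# Approximate: M basis convolutions, then a cheap linear combination.
basis_maps = np.stack([convolve2d(z, s, mode='valid') for s in basis])
approx = np.einsum('nm,mhw->nhw', coeff, basis_maps)
```

Because convolution is linear, swapping the order of filter reconstruction and convolution changes the cost ($M$ convolutions plus a matrix multiply instead of $N$ convolutions) but not the result.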
As shown by Rigamonti et al. [Rigamonti et al.(2013)Rigamonti, Sironi, Lepetit, and Fua], further speedups can be achieved by choosing the filters in the approximating basis to be rank-1, making the individual convolutions separable. This means that each basis filter can be decomposed into a sequence of horizontal and vertical filters, $s_k * z = v_k * (h_k * z)$, where $s_k \in \mathbb{R}^{d \times d}$, $v_k \in \mathbb{R}^{d \times 1}$, and $h_k \in \mathbb{R}^{1 \times d}$. Using this decomposition, the convolution with a separable filter can be performed in $O(2 d H' W')$ operations instead of $O(d^2 H' W')$.
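The separability property is easy to verify numerically. A minimal sketch with illustrative sizes: a rank-1 filter is the outer product of a vertical and a horizontal factor, and convolving with the two factors in sequence matches convolving with the full filter.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
d = 7
v = rng.standard_normal((d, 1))          # vertical d x 1 factor
h = rng.standard_normal((1, d))          # horizontal 1 x d factor
f = v @ h                                # rank-1 d x d filter (outer product)

z = rng.standard_normal((40, 40))

# Full 2D convolution: O(d^2) multiplications per output pixel.
full = convolve2d(z, f, mode='valid')

# Separable version: vertical then horizontal pass, O(2d) per output pixel.
sep = convolve2d(convolve2d(z, v, mode='valid'), h, mode='valid')
```

The two outputs agree by associativity of convolution, since $f = v * h$.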
The separable filters of [Rigamonti et al.(2013)Rigamonti, Sironi, Lepetit, and Fua] are a low-rank approximation, but performed in the spatial filter dimensions. Our key insight is that in CNNs substantial speedups can be achieved by also exploiting the cross-channel redundancy, performing a low-rank decomposition in the channel dimension as well. We explore both of these low-rank approximations in the sequel.
Note that the FFT [Mathieu et al.(2013)Mathieu, Henaff, and LeCun] could be used as an alternative speedup method to accelerate individual convolutions in combination with our low-rank cross-channel decomposition scheme. However, separable convolutions have several practical advantages: they are significantly easier to implement than a well-tuned FFT implementation, particularly on GPUs; they do not require feature maps to be padded to a special size, such as a power of two as in [Mathieu et al.(2013)Mathieu, Henaff, and LeCun]; they are far more memory efficient; and they yield a good speedup for small image and filter sizes too (which can be common in CNNs), whilst FFT acceleration tends to be better for large filters due to the overheads incurred in computing the FFTs.

2.1 Approximating Convolutional Neural Network Filter Banks
CNNs are obtained by stacking multiple layers of convolutional filter banks on top of each other, followed by a non-linear response function. Each filter bank or convolutional layer takes an input feature map $z_l(u, v, c)$, where $(u, v)$ are spatial coordinates and $z_l$ contains $C$ scalar features or channels. The output is a new feature map $z_{l+1}(u, v, n)$ such that $z_{l+1}^n = h_l(W_l^n * z_l + b_l^n)$, where $W_l^n$ and $b_l^n$ denote the $n$-th filter kernel and bias respectively, and $h_l$ is a non-linear activation function such as the Rectified Linear Unit (ReLU), $h_l(x) = \max(0, x)$. Convolutional layers can be intertwined with normalization, subsampling, and pooling layers which build translation invariance in local neighbourhoods. Other layer types are possible as well, but generally the convolutional ones are the most expensive. The process starts with $z_1 = x$, where $x$ is the input image, and ends by, for example, connecting the last feature map to a logistic regressor in the case of classification. All the parameters of the model are jointly optimized to minimize a loss over the training set using Stochastic Gradient Descent (SGD) with back-propagation.
The filters learnt for each layer (for convenience we drop the layer subscript $l$) are full-rank 3D filters with the same depth as the number $C$ of channels of the input, such that $W^n \in \mathbb{R}^{d \times d \times C}$. For example, for a 3-channel color image input, $C = 3$. The convolution of a 3D filter $W^n$ with the 3D image $z$ is the 2D image $W^n * z = \sum_{c=1}^{C} W^{nc} * z^c$, where $W^{nc}$ is a single channel of the filter. This is a sum of 2D convolutions, so we can think of each 3D filter as a collection of $C$ 2D filters whose output is collapsed to a 2D signal. However, since $N$ such 3D filters are applied to $z$, the overall output is a new 3D image with $N$ channels. This process is illustrated in Fig. 1 (a). The resulting computational cost for a convolutional layer with $N$ filters of size $d \times d$ acting on $C$ input channels is $O(C N d^2 H' W')$.
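The sum-of-2D-convolutions view of a 3D filter can be checked with a small numpy/scipy sketch (sizes illustrative): one 3D filter with depth $C$ collapses a $C$-channel input into a single 2D output map.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(2)
C, d = 3, 5
W3 = rng.standard_normal((C, d, d))      # one 3D filter, depth = C input channels
z = rng.standard_normal((C, 28, 28))     # C-channel input feature map

# Convolving the 3D filter is a sum of C per-channel 2D convolutions,
# collapsing the channels into a single 2D output map.
out = sum(convolve2d(z[c], W3[c], mode='valid') for c in range(C))
```

A layer with $N$ such filters simply stacks $N$ of these maps, giving the $O(C N d^2 H' W')$ cost stated above.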
We now propose two schemes to approximate a convolutional layer of a CNN to reduce the computational complexity and discuss their training in Sec. 2.2. Both schemes follow the same intuition: that CNN filter banks can be approximated using a low rank basis of filters that are separable in the spatial domain.
Scheme 1.
The first method for speeding up convolutional layers is to directly apply the method suggested in Sect. 2 to the filters of a CNN (Fig. 1 (b)). As described above, a single convolutional layer with $N$ filters requires evaluating $N C$ 2D filters $W^{nc}$. Note that there are $N$ filters operating on each input channel $c$. These can be approximated as linear combinations of a basis of $M < N$ (separable) filters $s^{mc}$ as $W^{nc} \approx \sum_{m=1}^{M} a^c_{nm} s^{mc}$. Since convolution is a linear operator, filter reconstruction and image convolution can be swapped, yielding the approximation $W^n * z = \sum_c W^{nc} * z^c \approx \sum_c \sum_m a^c_{nm} (s^{mc} * z^c)$. To summarize, the direct calculation involves computing $N C$ 2D filters with cost $O(N C d^2 H' W')$, while the approximation involves computing $M C$ 2D filters with cost $O(M C d^2 H' W' + M N C H' W')$ – the additional term accounting for the need to recombine the basis responses linearly. Due to the second term, the approximation is efficient only if $M \ll d^2$, i.e. if the number of filters in the basis is smaller than the filter area.
The first cost term would also suggest that efficiency requires the condition $M \ll N$; however, this can be considerably ameliorated by using separable filters in the basis. In this case the approximation cost is reduced to $O(M C d H' W' + M N C H' W')$; together with the former condition, Scheme 1 is then efficient if $M (d + N) \ll N d^2$.
Note that this scheme admits $C$ filter bases, as each basis operates on a different input channel. In practice, we share a single basis across input channels, because empirically there is no actual gain in performance from per-channel bases and a single basis is simpler and more compact.
Scheme 2.
Scheme 1 focuses on approximating 2D filters. As a consequence, each input channel is approximated by a particular basis of 2D separable filters. Redundancy among feature channels is exploited, but only in the sense of the output channels. In contrast, Scheme 2 is designed to take advantage of both input and output redundancies by considering 3D filters throughout. The idea is simple: each convolutional layer is factored as a sequence of two regular convolutional layers, but with rectangular (in the spatial domain) filters, as shown in Fig. 1 (c). The first convolutional layer has $M$ filters of spatial size $d \times 1$, resulting in a filter bank $\{v^m \in \mathbb{R}^{d \times 1 \times C}\}$ and producing intermediate feature maps $v^m * z = \sum_{c=1}^{C} v^{mc} * z^c$. The second convolutional layer has $N$ filters of spatial size $1 \times d$, resulting in a filter bank $\{h^{nm} \in \mathbb{R}^{1 \times d}\}$. Differently from Scheme 1, the filters operate on multiple channels simultaneously. The rectangular shape of the filters is selected to match a separable filter approximation. To see this, note that convolution by one of the original filters is approximated by
$W^n * z \approx \sum_{m=1}^{M} h^{nm} * (v^m * z)$   (1)
which is the sum of $M$ separable filters $h^{nm} * v^m$. The computational cost of this scheme is $O(M C d H' W)$ for the first layer of vertical filters and $O(N M d H' W')$ for the second layer of horizontal filters. Assuming that the image width $W$ is significantly larger than the filter size $d$, the output image width $W'$ is about the same as the input image width $W$. Hence the total cost can be simplified to $O(M d (C + N) H' W')$. Compared to the direct convolution cost of $O(N C d^2 H' W')$, this scheme is therefore convenient provided that $M (C + N) \ll N C d$. For example, if $M$, $C$, and $N$ are of the same order, the speedup is about $d/2$ times.
In both schemes, we are assuming that the full-rank original convolutional filter bank can be decomposed into a linear combination of a set of separable basis filters. The difference between the schemes is how and where they model the interaction between input and output channels, which amounts to how the low-rank channel-space approximation is modelled. In Scheme 1 this is done with the linear combination layer, whereas in Scheme 2 the channel interaction is modelled with 3D vertical and horizontal filters, inducing a summation over channels as part of the convolution.
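The two-layer factorization of Scheme 2 can be sketched directly in numpy (all sizes illustrative; the real layers also carry biases and are followed by non-linearities): a vertical $d \times 1$ layer summing over the $C$ input channels, followed by a horizontal $1 \times d$ layer summing over the $M$ intermediate maps.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(3)
C, M, N, d = 3, 8, 16, 5
V = rng.standard_normal((M, C, d, 1))    # layer 1: M vertical filters over C channels
Hf = rng.standard_normal((N, M, 1, d))   # layer 2: N horizontal filters over M maps

z = rng.standard_normal((C, 28, 28))

# First rectangular layer: d x 1 filters, summed over the C input channels.
mid = np.stack([sum(convolve2d(z[c], V[m, c], mode='valid') for c in range(C))
                for m in range(M)])
# Second rectangular layer: 1 x d filters, summed over the M intermediate maps.
out = np.stack([sum(convolve2d(mid[m], Hf[n, m], mode='valid') for m in range(M))
                for n in range(N)])
```

Counting multiplications, the first layer costs $M C d$ per pixel and the second $N M d$, against $N C d^2$ for the direct layer, matching the cost analysis above.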
2.2 Optimization
This section details how to attain the optimal separable basis representation of a convolutional layer under each scheme. The first method (Sec. 2.2.1) aims to reconstruct the original filters directly by minimizing filter reconstruction error. The second method (Sec. 2.2.2) approximates the convolutional layer indirectly, by minimizing the reconstruction error of the output of the layer.
2.2.1 Filter Reconstruction Optimization
The first way to attain the separable basis representation is to minimize the reconstruction error of the original filters under the new representation.
Scheme 1.
The separable basis can be learnt simply by minimizing the reconstruction error of the original filters whilst penalizing the nuclear norm of the basis filters $s^{mc}$. In fact, the nuclear norm $\|s^{mc}\|_*$ is a proxy for the rank of $s^{mc}$, and rank-1 filters are separable. This yields the formulation:
$\min_{\{s^{mc}\},\{a\}} \; \sum_{c=1}^{C} \sum_{n=1}^{N} \Big\| W^{nc} - \sum_{m=1}^{M} a^c_{nm}\, s^{mc} \Big\|_2^2 + \lambda \sum_{c=1}^{C} \sum_{m=1}^{M} \| s^{mc} \|_*$   (2)
This minimization is bi-convex: with $\{a\}$ fixed a unique optimal $\{s^{mc}\}$ can be found, and vice versa, so a minimum is found by alternating between optimizing $\{s^{mc}\}$ and $\{a\}$. For full details of the implementation of this optimization see [Rigamonti et al.(2013)Rigamonti, Sironi, Lepetit, and Fua].
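The role of the rank penalty can be seen in isolation with a small sketch: the closest separable (rank-1) filter to a given 2D filter, in the Frobenius sense, comes from its leading singular pair. This is only an illustration of why rank-1 means separable; the optimization above instead penalizes the nuclear norm inside the joint objective (2).

```python
import numpy as np

rng = np.random.default_rng(7)
f = rng.standard_normal((5, 5))          # a learnt, generally full-rank 2D filter

# Best rank-1 (separable) approximation from the top singular pair.
u, s, vt = np.linalg.svd(f)
v_col = u[:, :1] * s[0]                  # vertical 5x1 factor
h_row = vt[:1, :]                        # horizontal 1x5 factor
f_sep = v_col @ h_row                    # rank-1 filter = outer product

# Relative energy left outside the rank-1 approximation.
rel_err = np.linalg.norm(f - f_sep) / np.linalg.norm(f)
```

The residual equals the energy in the discarded singular values, which is why full-rank learnt filters generally need several separable terms to be reconstructed well.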
Scheme 2.
The set of horizontal and vertical filters can be learnt by explicitly minimizing the reconstruction error of the original filters. From (1) we can see that the original filters can be approximated by minimizing the objective function
$\min_{\{h^{nm}\},\{v^m\}} \; \sum_{n=1}^{N} \Big\| W^n - \sum_{m=1}^{M} h^{nm} * v^m \Big\|_2^2$   (3)
This optimization is simpler than for Scheme 1 due to the lack of nuclear norm constraints, which we are able to avoid by modelling the separability explicitly with two variables. We perform conjugate gradient descent, alternating between optimizing the horizontal and vertical filter sets.
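A minimal 2D sketch of this alternating optimization, using closed-form least-squares solves in place of conjugate gradient descent and a single 2D filter rather than the full 3D bank (all names and sizes are illustrative): with one factor fixed, the other is the solution of a linear least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(4)
d, M = 7, 3
W = rng.standard_normal((d, d))          # a learnt filter to approximate

V = rng.standard_normal((d, M))          # columns act as vertical filters
H = rng.standard_normal((M, d))          # rows act as horizontal filters

# Alternate exact least-squares solves: each subproblem is linear in one factor.
for _ in range(100):
    V = W @ np.linalg.pinv(H)            # argmin_V ||W - V H||_F with H fixed
    H = np.linalg.pinv(V) @ W            # argmin_H ||W - V H||_F with V fixed

err = np.linalg.norm(W - V @ H) / np.linalg.norm(W)

# The best achievable error with M separable terms is the rank-M SVD residual.
s = np.linalg.svd(W, compute_uv=False)
best = np.linalg.norm(s[M:]) / np.linalg.norm(s)
```

Since $VH$ has rank at most $M$, the alternating solution can never beat the rank-$M$ SVD residual, and each alternation is non-increasing in the objective.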
2.2.2 Data Reconstruction Optimization
The problem with optimizing the separable basis by minimizing the original filter reconstruction error is that this does not necessarily give the basis that best preserves the end CNN prediction performance. As an alternative, one can optimize a scheme's separable basis by aiming to reconstruct the outputs of the original convolutional layer given training data. For example, for Scheme 2 this amounts to
$\min_{\{h^{nm}\},\{v^m\}} \; \sum_{i=1}^{|\mathcal{D}|} \big\| \Phi_l(x_i) - \hat{\Phi}_l(x_i) \big\|_2^2$   (4)
where $l$ is the index of the convolutional layer to be approximated, $\Phi_l(x_i)$ is the evaluation of the original CNN up to and including layer $l$ on data sample $x_i$, $\hat{\Phi}_l(x_i)$ is the corresponding evaluation with layer $l$ replaced by its approximation, and $\{x_i\}_{i=1}^{|\mathcal{D}|}$ is the set of training examples. This optimization can be done quite elegantly by simply mirroring the CNN with the unoptimized separable basis layers, and training only the approximation layers by back-propagating the error between the output of the original layer and the output of the approximation layer (see Fig. 2). This is done layer by layer.
There are two main advantages of this method for optimizing the approximation schemes. The first is that the approximation is conditioned on the manifold of the training data: original filter dimensions that are irrelevant or redundant in the context of the training data will be ignored by minimizing data reconstruction error, but would still be penalized by minimizing filter reconstruction error (Sec. 2.2.1), wastefully using up model capacity. Secondly, stacks of approximated layers can be learnt to incorporate the approximation error of previous layers, by feeding the data through the approximated net rather than the original net for the layers below the one being optimized (see Fig. 2 (b)). This additionally means that all the approximation layers could be optimized jointly with back-propagation.
An obvious alternative optimization strategy would be to replace the original convolutional layers with the unoptimized approximation layers and train just those layers by backpropagating the classification error of the approximated CNN. However, this does not actually result in better classification accuracy than doing data reconstruction optimization – in practice, optimizing the separable basis within the full network leads to overfitting of the training data, and attempts to minimize this overfitting through regularization methods like dropout [Hinton et al.(2012)Hinton, Srivastava, Krizhevsky, Sutskever, and Salakhutdinov] lead to underfitting, most likely due to the fact that we are already trying to heavily approximate our original filters. However, this is an area that needs to be investigated in more detail.
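The layer-wise output-matching idea can be sketched in a toy linear setting, with a fully connected map standing in for the convolutional layer and plain gradient descent for back-propagation (all names, sizes, and the learning rate are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(5)
D, N, M, S = 20, 16, 4, 500
W = rng.standard_normal((N, D))          # original layer weights (stand-in)
X = rng.standard_normal((D, S))          # layer inputs collected from training data
Y = W @ X                                # targets: the original layer's outputs

A = 0.1 * rng.standard_normal((N, M))    # the two low-rank "approximation layers"
B = 0.1 * rng.standard_normal((M, D))

lr = 0.02
for _ in range(2000):
    R = (A @ B @ X - Y) / S              # mean output-reconstruction residual
    gA = R @ (B @ X).T                   # gradient w.r.t. the second factor
    gB = A.T @ R @ X.T                   # gradient w.r.t. the first factor
    A -= lr * gA
    B -= lr * gB

data_err = np.linalg.norm(A @ B @ X - Y) / np.linalg.norm(Y)
```

Note that only the approximation factors are trained and only against the layer's own outputs; no classification loss is involved, which is what distinguishes this from the overfitting-prone alternative discussed above.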
3 Experiments & Results
In this section we demonstrate the application of both proposed filter approximation schemes and show that we can achieve large speedups with a very small drop in accuracy. We use a pretrained CNN that performs case-insensitive character classification of scene text. Character classification is an essential part of many text spotting pipelines such as [Quack(2009), Posner et al.(2010)Posner, Corke, and Newman, Yang et al.(2012)Yang, Quehl, and Sack, Neumann and Matas(2010), Neumann and Matas(2011), Neumann and Matas(2012), Wang et al.(2011)Wang, Babenko, and Belongie, Neumann and Matas(2013), Alsharif and Pineau(2014), Bissacco et al.(2013)Bissacco, Cummins, Netzer, and Neven].
We first give the details of the base CNN model used for character classification which will be subject to speedup approximations. The optimization processes and how we attain the approximations of Scheme 1 & 2 to this model are given, and finally we discuss the results of the separable basis approximation methods on accuracy and inference time of the model.
Test Model.
For scene character classification, we use a four-layer CNN with a softmax output. The CNN outputs a probability distribution over a 37-class alphabet, which includes all 26 letters and 10 digits as well as a noise/background (no-text) class, given a grey input image patch which has been zero-centred and normalized by subtracting the patch mean and dividing by the standard deviation. The non-linearity used between convolutional layers is maxout [Goodfellow et al.(2013b)Goodfellow, WardeFarley, Mirza, Courville, and Bengio], which amounts to taking the maximum response over a number of linear models, e.g. the maxout of two feature channels $z^1$ and $z^2$ is simply their pointwise maximum $\max(z^1, z^2)$. Table 1 gives the details of the layers of the model, which is connected in the linear arrangement Conv1 → Conv2 → Conv3 → Conv4 → Softmax.

Layer name | Filter size | In channels | Out channels | Filters | Maxout groups | Time
Conv1      | 9×9         | 1           | 48           | 96      | 2             | 0.473 ms (8.3%)
Conv2      | 9×9         | 48          | 64           | 128     | 2             | 3.008 ms (52.9%)
Conv3      | 8×8         | 64          | 128          | 512     | 4             | 2.160 ms (38.0%)
Conv4      | 1×1         | 128         | 37           | 148     | 4             | 0.041 ms (0.7%)
Softmax    | –           | 37          | 37           | –       | –             | 0.004 ms (0.1%)
Datasets & Evaluation.
The training dataset consists of 163,222 collected character samples from a number of scene text and synthesized character datasets [icd(), Lucas(2005), Shahab et al.(2011)Shahab, Shafait, and Dengel, Karatzas et al.(2013)Karatzas, Shafait, Uchida, Iwamura, Mestre, Mas, Mota, Almazan, de las Heras, et al., kai(), de Campos et al.(2009)de Campos, Babu, and Varma, Wang et al.(2012)Wang, Wu, Coates, and Ng]. The test set is the collection of 5379 cropped characters from the ICDAR 2003 training set, after removing all non-alphanumeric characters as in [Wang et al.(2011)Wang, Babenko, and Belongie, Alsharif and Pineau(2014)]. We evaluate the case-insensitive accuracy of the classifier, ignoring the background class. The Test Model achieves state-of-the-art results of 91.3% accuracy, compared to the next best result of 89.8% [Alsharif and Pineau(2014)].

Implementation Details.
The CNN framework we use is the CPU implementation of Caffe [Jia(2013)], where convolutions are performed by constructing a matrix of filter windows of the input, im2col, and using BLAS for the matrixmatrix multiplication between the filters and data windows. We found this to be the fastest CPU CNN implementation attainable. CNN training is done with SGD with momentum of 0.9 and weight decay of 0.0005. Dropout of 0.5 is used on all layers except Conv1 to regularize the weights, and the learning rate is adaptively reduced during the course of training.
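The im2col strategy can be sketched as follows. This is a simplified, single-channel illustration of the idea, not Caffe's actual implementation: every filter window of the input is unrolled into a column, so the whole filter bank is evaluated with one matrix-matrix multiply.

```python
import numpy as np
from scipy.signal import correlate2d

def im2col(z, d):
    """Stack every d x d window of a 2D image as a column."""
    H, W = z.shape
    cols = [z[i:i + d, j:j + d].ravel()
            for i in range(H - d + 1) for j in range(W - d + 1)]
    return np.array(cols).T              # shape (d*d, out_h * out_w)

rng = np.random.default_rng(6)
z = rng.standard_normal((10, 10))
filters = rng.standard_normal((4, 3, 3))  # a bank of 4 filters, 3x3 each

# One im2col call plus one BLAS-style matrix multiply evaluates the whole bank.
cols = im2col(z, 3)
out = (filters.reshape(4, -1) @ cols).reshape(4, 8, 8)

# Caffe-style "convolution" is correlation (no filter flip); check one map.
ref = correlate2d(z, filters[0], mode='valid')
```

The cost of building `cols` is why implementations that need many separate im2col calls (as Scheme 1 does, discussed below) can lose their theoretical advantage.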
For filter reconstruction optimization, we optimize a separable basis until a stable minimum of reconstruction error is reached. For data reconstruction optimization, we optimize each approximated layer in turn, and can incorporate a finetuning with joint optimization.
For the CNN presented, we only approximate layers Conv2 and Conv3. This is because layer Conv4 has a 1×1 filter size and so would not benefit much from our speedup schemes. We also don't approximate Conv1, due to the fact that it acts on raw pixels from natural images – the filters in Conv1 are very different from those found in the rest of the network, and experimentally we found that they cannot be approximated well by separable filters (as also observed in [Denton et al.(2014)Denton, Zaremba, Bruna, LeCun, and Fergus]). Omitting layers Conv1 and Conv4 from the schemes does not change the overall network speedup significantly, since Conv2 and Conv3 constitute 90% of the overall network processing time, as shown in Table 1.
Layerwise Performance.
Fig. 3 shows the output reconstruction error of each approximated layer on the test data. It is clear that the reconstruction error worsens as the achieved speedup increases, both theoretically and in practice. As the reconstruction error is measured on test data features fed through the approximated layers, the data reconstruction optimization scheme gives, as expected, lower errors than filter reconstruction for the same speedup. This generally holds even when completely random Gaussian noise is fed through the approximated layers – data from a completely different distribution to that which the data optimization scheme was trained on.
Looking at the theoretical speedups possible in Fig. 3, Scheme 1 gives a better reconstruction-error-to-speedup ratio, suggesting that the Scheme 1 model is perhaps better suited for approximating convolutional layers. However, when the actual measured speedups are compared, Scheme 1 is in fact slower than Scheme 2 for the same reconstruction error. This is because the Caffe convolution routine is optimized for 3D convolution (summing over channels), so Scheme 2 requires only two im2col and BLAS calls. Implementing Scheme 1 with Caffe-style convolution, which involves per-channel convolution without channel summation, requires many more costly im2col and BLAS calls, slowing down the layer evaluation and negating the model approximation speedups. It is possible that using a different convolution routine with Scheme 1 would bring the actual timings closer to the theoretically achievable ones.
Full Net Performance.
Fig. 4 (b) & (c) show the overall drop in accuracy as the speedup of the end-to-end network increases under different optimization strategies. Generally, joint data optimization of Conv2 and Conv3 improves final classification performance for a given speedup. Under Scheme 2 we can achieve a 2.5× speedup with no loss in accuracy, and a 4.5× speedup with only a 1% drop in classification accuracy, giving 90.3% accuracy – still state-of-the-art for this benchmark. The 4.5× configuration is obtained by approximating the original 128 Conv2 filters with 31 horizontal filters followed by 128 vertical filters, and the original 512 Conv3 filters with 26 horizontal filters followed by 512 vertical filters.
This speedup is incredibly useful for sliding-window schemes, allowing fast generation of, for example, detection maps such as the character detection map shown in Fig. 5. There is very little difference even with a 3.5× speedup, and when incorporated into a full application pipeline, the speedup can be tuned to give an acceptable end result.
Comparing to an FFT-based CNN [Mathieu et al.(2013)Mathieu, Henaff, and LeCun], our method can actually give greater speedups. With the same layer setup (5×5 kernels, 384 filters), Scheme 2 gives an actual 2.4× speedup with 256 basis filters (which should result in no performance drop), compared to the speedup reported in [Mathieu et al.(2013)Mathieu, Henaff, and LeCun]. Comparing with [Denton et al.(2014)Denton, Zaremba, Bruna, LeCun, and Fergus], simply performing filter reconstruction approximation with Scheme 2 on the second layer of OverFeat [Sermanet et al.(2013)Sermanet, Eigen, Zhang, Mathieu, Fergus, and LeCun] gives a 2× theoretical speedup with only a 0.5% drop in top-5 classification accuracy on ImageNet, far better than the 1.2% drop in accuracy for the same theoretical speedup reported in [Denton et al.(2014)Denton, Zaremba, Bruna, LeCun, and Fergus]. This accuracy should be further improved if data optimization is used.

4 Conclusions
In this paper we have shown that the redundancies in the representation of CNN convolutional layers can be exploited by approximating a learnt full-rank filter bank as combinations of a rank-1 filter basis. We presented two schemes to do this, each with two optimization techniques for attaining the approximation. The resulting approximations require significantly fewer operations to compute, resulting in large speedups observed with a real CNN trained for scene text character recognition: a 4.5× speedup with only a 1% drop in classification accuracy.
In future work it would be interesting to experiment with other arrangements of separable filters in layers, e.g. a horizontal basis layer, followed by a vertical basis layer, followed by a linear combination layer. Looking at the filter reconstructions of the two schemes in Fig. 4 (a), it is obvious that the two presented schemes act very differently, so the connection between different approximation structures could be explored. It should also be further investigated whether these model approximations can be effectively taken advantage of during training, with low-rank filter layers being learnt in a discriminative manner.
Acknowledgements.
Funding for this research is provided by the EPSRC and ERC grant VisRec no. 228180.
References
 [icd()] http://algoval.essex.ac.uk/icdar/datasets.html.
 [kai()] http://www.iapr-tc11.org/mediawiki/index.php/KAIST_Scene_Text_Database.
 [Alsharif and Pineau(2014)] O. Alsharif and J. Pineau. End-to-End Text Recognition with Hybrid HMM Maxout Models. In International Conference on Learning Representations, 2014.
 [Bissacco et al.(2013)Bissacco, Cummins, Netzer, and Neven] A. Bissacco, M. Cummins, Y. Netzer, and H. Neven. PhotoOCR: Reading text in uncontrolled conditions. In International Conference of Computer Vision, 2013.
 [de Campos et al.(2009)de Campos, Babu, and Varma] T. de Campos, B. R. Babu, and M. Varma. Character recognition in natural images. 2009.

 [Denil et al.(2013)Denil, Shakibi, Dinh, and de Freitas] M. Denil, B. Shakibi, L. Dinh, and N. de Freitas. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
 [Denton et al.(2014)Denton, Zaremba, Bruna, LeCun, and Fergus] E. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. arXiv preprint arXiv:1404.0736, 2014.
 [Farabet et al.(2011)Farabet, LeCun, Kavukcuoglu, Culurciello, Martini, Akselrod, and Talay] C. Farabet, Y. LeCun, K. Kavukcuoglu, E. Culurciello, B. Martini, P. Akselrod, and S. Talay. Large-scale FPGA-based convolutional networks. Machine Learning on Very Large Data Sets, 2011.
 [Farabet et al.(2012)Farabet, Couprie, Najman, and LeCun] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Scene parsing with multiscale feature learning, purity trees, and optimal covers. arXiv preprint arXiv:1202.2160, 2012.
 [Goodfellow et al.(2013a)Goodfellow, Bulatov, Ibarz, Arnoud, and Shet] I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet. Multi-digit number recognition from street view imagery using deep convolutional neural networks. In International Conference on Learning Representations, 2013a.
 [Goodfellow et al.(2013b)Goodfellow, WardeFarley, Mirza, Courville, and Bengio] I. J. Goodfellow, D. WardeFarley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. arXiv preprint arXiv:1302.4389, 2013b.
 [Hinton et al.(2012)Hinton, Srivastava, Krizhevsky, Sutskever, and Salakhutdinov] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
 [Iandola et al.(2014)Iandola, Moskewicz, Karayev, Girshick, Darrell, and Keutzer] F. Iandola, M. Moskewicz, S. Karayev, R. Girshick, T. Darrell, and K. Keutzer. Densenet: Implementing efficient convnet descriptor pyramids. arXiv preprint arXiv:1404.1869, 2014.

 [Jia(2013)] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/, 2013.
 [Karatzas et al.(2013)Karatzas, Shafait, Uchida, Iwamura, Mestre, Mas, Mota, Almazan, de las Heras, et al.] D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, S. R. Mestre, J. Mas, D. F. Mota, J. Almazan, L. P. de las Heras, et al. ICDAR 2013 robust reading competition. In Document Analysis and Recognition (ICDAR), 2013 12th International Conference on, pages 1484–1493. IEEE, 2013.
 [Kavukcuoglu et al.(2010)Kavukcuoglu, Sermanet, Boureau, Gregor, Mathieu, and LeCun] K. Kavukcuoglu, P. Sermanet, Y. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning convolutional feature hierarchies for visual recognition. In NIPS, volume 1, page 5, 2010.
 [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, volume 1, page 4, 2012.

[Lee et al.(2009)Lee, Grosse, Ranganath, and Ng]
H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng.
Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations.
In Proceedings of the 26th Annual International Conference on Machine Learning, pages 609–616. ACM, 2009.  [Lucas(2005)] S. Lucas. ICDAR 2005 text locating competition results. In Document Analysis and Recognition, 2005. Proceedings. Eighth International Conference on, pages 80–84. IEEE, 2005.
 [Mamalet and Garcia(2012)] F. Mamalet and C. Garcia. Simplifying convnets for fast learning. In Artificial Neural Networks and Machine Learning–ICANN 2012, pages 58–65. Springer, 2012.
 [Mathieu et al.(2013)Mathieu, Henaff, and LeCun] M. Mathieu, M. Henaff, and Y. LeCun. Fast training of convolutional networks through FFTs. CoRR, abs/1312.5851, 2013.
 [Neumann and Matas(2010)] L. Neumann and J. Matas. A method for text localization and recognition in real-world images. In Proc. Asian Conf. on Computer Vision, pages 770–783. Springer, 2010.
 [Neumann and Matas(2011)] L. Neumann and J. Matas. Text localization in real-world images using efficiently pruned exhaustive search. In Proc. ICDAR, pages 687–691. IEEE, 2011.
 [Neumann and Matas(2012)] L. Neumann and J. Matas. Real-time scene text localization and recognition. In Proc. CVPR, volume 3, pages 1187–1190. IEEE, 2012.
 [Neumann and Matas(2013)] L. Neumann and J. Matas. Scene text localization and recognition with oriented stroke detection. In 2013 IEEE International Conference on Computer Vision (ICCV 2013), pages 97–104. IEEE, December 2013.
 [Oquab et al.(2014)Oquab, Bottou, Laptev, and Sivic] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations using convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014.
 [Posner et al.(2010)Posner, Corke, and Newman] I. Posner, P. Corke, and P. Newman. Using text-spotting to query the world. In Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2010.
 [Quack(2009)] T. Quack. Large scale mining and retrieval of visual data in a multimodal context. PhD thesis, ETH Zurich, 2009.
 [Razavian et al.(2014)Razavian, Azizpour, Sullivan, and Carlsson] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. arXiv preprint arXiv:1403.6382, 2014.
 [Rigamonti et al.(2011)Rigamonti, Brown, and Lepetit] R. Rigamonti, M. A. Brown, and V. Lepetit. Are sparse representations really relevant for image classification? In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1545–1552. IEEE, 2011.
 [Rigamonti et al.(2013)Rigamonti, Sironi, Lepetit, and Fua] R. Rigamonti, A. Sironi, V. Lepetit, and P. Fua. Learning separable filters. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pages 2754–2761. IEEE, 2013.
 [Sermanet et al.(2013)Sermanet, Eigen, Zhang, Mathieu, Fergus, and LeCun] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.
 [Shahab et al.(2011)Shahab, Shafait, and Dengel] A. Shahab, F. Shafait, and A. Dengel. ICDAR 2011 robust reading competition challenge 2: Reading text in scene images. In Proc. ICDAR, pages 1491–1496. IEEE, 2011.
 [Song et al.(2012)Song, Zickler, Althoff, Girshick, Fritz, Geyer, Felzenszwalb, and Darrell] H. O. Song, S. Zickler, T. Althoff, R. Girshick, M. Fritz, C. Geyer, P. Felzenszwalb, and T. Darrell. Sparselet models for efficient multiclass object detection. In Computer Vision–ECCV 2012, pages 802–815. Springer, 2012.
 [Song et al.(2013)Song, Darrell, and Girshick] H. O. Song, T. Darrell, and R. B. Girshick. Discriminatively activated sparselets. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 196–204, 2013.
 [Taigman et al.(2014)Taigman, Yang, Ranzato, and Wolf] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. DeepFace: Closing the gap to human-level performance in face verification. In IEEE CVPR, 2014.
 [Toshev and Szegedy(2013)] A. Toshev and C. Szegedy. DeepPose: Human pose estimation via deep neural networks. arXiv preprint arXiv:1312.4659, 2013.
 [van de Sande et al.(2011)van de Sande, Uijlings, Gevers, and Smeulders] K. van de Sande, J. Uijlings, T. Gevers, and A. Smeulders. Segmentation as selective search for object recognition. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 1879–1886. IEEE, 2011.
 [Vanhoucke et al.(2011)Vanhoucke, Senior, and Mao] V. Vanhoucke, A. Senior, and M. Z. Mao. Improving the speed of neural networks on cpus. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, 2011.
 [Wang et al.(2011)Wang, Babenko, and Belongie] K. Wang, B. Babenko, and S. Belongie. End-to-end scene text recognition. In Proc. ICCV, pages 1457–1464. IEEE, 2011.
 [Wang et al.(2012)Wang, Wu, Coates, and Ng] T. Wang, D. J. Wu, A. Coates, and A. Y. Ng. End-to-end text recognition with convolutional neural networks. In Pattern Recognition (ICPR), 2012 21st International Conference on, pages 3304–3308. IEEE, 2012.
 [Yang et al.(2012)Yang, Quehl, and Sack] H. Yang, B. Quehl, and H. Sack. A framework for improved video text detection and recognition. In Int. Journal of Multimedia Tools and Applications (MTAP), 2012.