1 Introduction
Deep architectures and, in particular, convolutional neural networks (ConvNets) have experienced great success in recent years. However, while able to successfully tackle a wide range of challenging problems, current architectures are often limited by the need for large amounts of memory and computational capacity.
In this paper, we set out to alleviate these issues where possible. In particular, we consider ConvNets used for computer vision tasks, as the issues of memory and computation typically become paramount in that context. Networks useful for real-world tasks may require as much as a few hundred million parameters [1] to produce state-of-the-art results, which increases the memory footprint as well as the computational demand. Unfortunately, this makes it hard to deploy such networks on platforms where memory and computational resources are constrained, such as portable devices. In this work, we demonstrate that the use of filter compositions can not only reduce the number of parameters required to train large-scale networks, but also provide better classification performance, as evidenced by our experimental results.
We identify two bottlenecks in current convolutional neural network models: computation and memory. While the most computationally expensive operations occur in the first few convolutional layers [2], the larger memory footprint is typically caused by the later, fully-connected, layers. Here, we focus on the computational bottleneck and propose a new architecture intended to speed up the first set of convolutional layers while maintaining or surpassing the original performance. Further, as a consequence of the improved learning capacity of the network, our approach indirectly alleviates the memory bottleneck, leading to a significant reduction in the memory footprint.
Many of the current approaches attempting to reduce the computational cost rely on the hope that learned N-D filters have sufficiently low rank to be well approximated by separable filters [2, 3, 4]. The main advantages of these approaches are a reduced computational cost when the filters are large and a reduction in the number of parameters in the convolutional layers. However, these methods require a network pre-trained using complete filters and a post-processing step to fine-tune the network and minimize the drop in performance compared to the pre-trained network.
Our approach is different from those mentioned above. We propose DecomposeMe, a novel architecture based on 1D convolutions, depicted in Figure 1. This architecture introduces three main novelties. i) Our architecture imposes separability as a hard constraint by directly learning 1D filter compositions. The fundamental idea behind our method is the fact that any matrix (2D tensor) can be represented as a weighted combination of separable matrices. Therefore, existing architectures can be adequately represented by composing each 2D filter kernel from a combination of 1D filters (1D tensors). ii) Our proposal further improves the compactness of the model by sharing filters within the convolutional layers. In this way, the proposed network minimizes redundancy and thus further reduces the number of parameters. iii) Our proposal improves the learning capacity of the model by inserting a nonlinearity in between the 1D filter components. With this modification, the effective depth of the network increases, which is intimately related to the number of linear regions available to approximate the sought-after function [5]. As a result, we obtain compact models that do not require a pre-trained network and that minimize the computational cost and the memory footprint compared to their equivalent networks using 2D filters. A reduced memory footprint has the additional advantage of enabling larger batch sizes at training time and, therefore, computing better gradient approximations, leading to improved classification performance, as demonstrated in our experiments.
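The core identity behind point (i) can be illustrated numerically: convolving with a vertical 1D filter and then a horizontal one is equivalent to convolving with their rank-1 (separable) 2D kernel, and DecomposeMe additionally inserts a nonlinearity between the two passes. A minimal sketch in plain NumPy ('valid' cross-correlation, single channel; all sizes here are illustrative, not from the paper):

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 2D cross-correlation with 'valid' padding (single channel)."""
    kh, kw = k.shape
    H = x.shape[0] - kh + 1
    W = x.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
v = rng.standard_normal((3, 1))   # vertical 1D filter
h = rng.standard_normal((1, 3))   # horizontal 1D filter

# Composing the two 1D passes equals one pass with the rank-1 kernel v @ h.
y_separable = conv2d_valid(conv2d_valid(x, v), h)
y_full = conv2d_valid(x, v @ h)
assert np.allclose(y_separable, y_full)

# DecomposeMe additionally inserts a nonlinearity (ReLU) between the passes,
# which is what breaks the pure rank-1 equivalence and adds capacity.
y_decomposed = conv2d_valid(np.maximum(conv2d_valid(x, v), 0.0), h)
```

Without the nonlinearity the composition can only express rank-1 kernels (or sums of them, with several intermediate filters); with it, the effective depth increases as described in point (iii).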
A comprehensive set of experiments on four datasets, including the two large-scale datasets Places2 and ImageNet, shows the capabilities of our proposal. For instance, on Places2, compared to a VGG-B model, we obtain a relative improvement in top-1 classification accuracy while using fewer parameters than the baseline and with a considerable speed-up in a forward-backward pass. Additional experiments on stereo matching also demonstrate the general applicability of the proposed architecture.
2 Related work
In the last few years, the computer vision community has experienced the great success of deep learning. The performance of these end-to-end architectures has continuously increased and outperformed traditional hand-crafted systems. An essential component of their success has been the increase in available data as well as the availability of more powerful computers, making it possible to train larger and more computationally demanding networks. For instance, in 2012 the AlexNet [6] model was proposed and won the ImageNet classification challenge with a network that had approximately 2.0M parameters in the convolutional layers (i.e., excluding fully connected ones). More recently, different variations of VGG models were introduced [7], of which VGG16 has over 14.5M feature parameters. VGG16 increases the depth of the model by substituting each convolutional kernel with consecutive convolutions consisting of smaller kernels while maintaining the number of filters. For example, the VGG models in [7] substitute larger kernels with consecutive rectified layers of smaller kernels. This operation reduces the degrees of freedom compared to the original kernels but at the same time inserts a nonlinearity in between the smaller kernels, increasing the capacity of the model for partitioning the space [5]. Despite improving the classification performance, the large number of parameters not only makes the training process slow but also makes it difficult to use these models in portable devices where memory and computational resources are constrained.

The growing number of applications deployed on portable devices has motivated recent efforts to speed up deep models by reducing their complexity. A forerunner work on reducing the complexity of a neural network is the so-called network distillation method proposed in [8]. The idea behind this approach is to train a large, capable, but slow network and then use its output to train a smaller one. The main strength comes from using the large network to take care of the regularization process, facilitating subsequent training. However, this method requires a large pre-trained network to begin with, which is not always feasible, especially in new problem domains.
Memory-wise, the largest contribution comes from the fully connected layers, while time-wise the bottleneck is in the first convolutional layers due to a large number of multiplications (larger kernels). In this work, we address the latter by simplifying the first convolutional layers. There have been several attempts to reduce the computational cost of these first layers; for example, Denil et al. [9] proposed to learn only 5% of the parameters and predict the rest based on dictionaries. The existence of this redundancy in the network has motivated other researchers to explore linear structures within the convolutional layers [2, 10, 11], usually focusing on finding approximations to filters (low-rank filters) by adding constraints in a post-learning process. More specifically, these approaches often learn the unconstrained filters and then approximate the output using a low-rank constraint. For instance, [2] and [10] focus on improving test time by representing convolutional layers as linear combinations of a certain basis. As a result, at test time, a lower number of convolutions is needed, achieving speed-ups with virtually no drop in performance. Liu et al. [11] instead consider sparse representations of the basis rather than linear combinations. However, similar to the distillation process, the starting point of these methods is a pre-trained model.
Directly related to our proposed method is [12], although it does not concern convolutional neural networks. In that paper, the authors aim at learning separable filters for image processing. To this end, they propose learning a filter combination while reinforcing filter separability through low-rank constraints in the cost function. Their results are promising and demonstrate the benefits of learning combinations of separable filters. In contrast to that work, we operate within the convolutional layers of a neural network and our filter sharing strategy is different. More importantly, we do not use soft constraints during the optimization process. Instead, we directly enforce the filters to be 1D.
3 Simplifying ConvNets through Filter Compositions
In this section we present our DecomposeMe architecture. The essence of our proposal consists of decomposing the N-D kernels of a traditional network into N consecutive layers of 1D kernels, see Figure 1. We consider each N-D filter as a linear combination of other filters. In contrast to [12], where these other filters are found by solving an optimization problem with additional low-rank constraints, we constrain the filters to be 1D and learn them directly from the data. Performance-wise, it turns out that such a decomposition not only mimics the behavior of the original, more complex, network but often surpasses it, while being significantly more compact and incurring a lower computational cost.
For the purpose of clarity, we will here consider 2D filters; however, the analysis applies similarly to the N-D case. With that in mind, a typical convolutional layer can be analyzed as follows. Let $W \in \mathbb{R}^{N \times C \times d \times d}$ denote the weights of a 2D convolutional layer, where $C$ is the number of input planes, $N$ is the number of output planes (target number of feature maps) and $d \times d$ is the kernel size of each feature map. Let $b \in \mathbb{R}^{N}$ be the vector representing the bias term for each filter. Further, let us denote by $W^i$ the $i$-th kernel in the layer. Common approaches first learn these filters from data and then find low-rank approximations as a post-processing step [10]. However, learned filters may not be separable, especially those in the first convolutional layer [2, 10], and these algorithms require an additional fine-tuning step to compensate for drops in performance.
Instead, it is possible to relax the rank-1 constraint and essentially rewrite $W^i$ as a linear combination of 1D filters [12]:

$$W^i = \sum_{r=1}^{K} \sigma_r \, \bar{v}_r \bar{h}_r^{\top} \qquad (1)$$

where $\bar{v}_r$ and $\bar{h}_r$ are vectors of length $d$, $\sigma_r$ is a scalar weight, and $K$ is the rank of $W^i$.
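Equation (1) is exactly the singular value decomposition of the kernel written as a sum of rank-1 (separable) terms, which can be checked numerically. A short sketch with NumPy (the 7x7 kernel below is an arbitrary illustration, not a learned filter):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((7, 7))  # an arbitrary 7x7 2D kernel

# SVD gives the expansion in Eq. (1): W = sum_r sigma_r * v_r h_r^T,
# with v_r, h_r the left/right singular vectors and sigma_r the singular values.
U, S, Vt = np.linalg.svd(W)
rank1_terms = [S[r] * np.outer(U[:, r], Vt[r]) for r in range(len(S))]
W_rebuilt = sum(rank1_terms)
assert np.allclose(W, W_rebuilt)

# Truncating the sum yields the best low-rank approximation of the filter,
# which is what post-hoc approximation methods [2, 10] exploit.
W_rank2 = sum(rank1_terms[:2])
```

DecomposeMe instead learns the 1D factors directly, so no truncation or fine-tuning step is needed.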
Based on this representation we propose DecomposeMe, an architecture consisting of decomposed layers. Each decomposed layer represents an N-D convolutional layer as a composition of 1D filters with, in addition, a nonlinearity in between (Figure 1). The $i$-th output of a decomposed layer, $f_i$, as a function of its input, $x$, can be expressed as:

$$f_i(x) = \varphi\left(b_i^{h} + \sum_{l=1}^{L} \bar{h}_i^{l} * \left[\varphi\left(b_l^{v} + \sum_{c=1}^{C} \bar{v}_l^{c} * x_c\right)\right]\right) \qquad (2)$$

where $L$ represents the number of filters in the intermediate layer and $\varphi(\cdot)$ denotes the nonlinearity.
$\varphi$ is set to the rectified linear unit (ReLU [6]) in our experiments.

Decomposed layers have two major properties: intrinsically low computational cost and simplicity. Computational cost: decomposed layers are represented with a reduced number of parameters compared to their original counterparts. This is an immediate consequence of two important concepts: the direct use of 1D filters and the sharing scheme across a convolutional layer, leading to greater computational savings, especially for large kernel sizes. Simplicity: decomposed architectures are deeper but simpler structures. Decomposed layers are based on filter compositions and therefore lead to smoother (simpler) equivalent 2D filters that help during training by acting as a regularizing factor [2]. Moreover, decomposed layers include a nonlinearity in between convolutions, increasing the effective depth of the model. As a direct consequence, the upper bound on the number of available linear regions increases [13]. As is evident from our results, decomposed layers learn faster per epoch than equivalent 2D convolutional layers. This suggests that the simplicity of the decomposed layers not only reduces the number of parameters but also benefits the training process.
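A decomposed layer as written in Eq. (2) can be sketched in a few lines of NumPy. This is a minimal illustration only ('valid' padding, zero biases, and arbitrary sizes C=3 input planes, L=8 intermediate filters, N=16 outputs; an actual implementation would use a deep learning framework):

```python
import numpy as np

def conv1d_cols(x, k):
    """Vertical (d x 1) 'valid' cross-correlation on a 2D map."""
    d = len(k)
    return sum(k[a] * x[a:x.shape[0] - d + 1 + a, :] for a in range(d))

def conv1d_rows(x, k):
    """Horizontal (1 x d) 'valid' cross-correlation on a 2D map."""
    d = len(k)
    return sum(k[a] * x[:, a:x.shape[1] - d + 1 + a] for a in range(d))

def decomposed_layer(x, V, bv, H, bh):
    """Eq. (2): x is (C, height, width); V is (L, C, d) vertical filters,
    H is (N, L, d) horizontal filters; phi is ReLU."""
    relu = lambda z: np.maximum(z, 0.0)
    # intermediate maps: phi(b_l^v + sum_c v_l^c * x_c)
    mid = np.stack([
        relu(bv[l] + sum(conv1d_cols(x[c], V[l, c]) for c in range(x.shape[0])))
        for l in range(V.shape[0])])
    # outputs: phi(b_i^h + sum_l h_i^l * mid_l)
    out = np.stack([
        relu(bh[i] + sum(conv1d_rows(mid[l], H[i, l]) for l in range(H.shape[1])))
        for i in range(H.shape[0])])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 10, 10))   # C=3 input planes
V = rng.standard_normal((8, 3, 5))     # L=8 intermediate 5x1 filters
H = rng.standard_normal((16, 8, 5))    # N=16 output 1x5 filters
y = decomposed_layer(x, V, np.zeros(8), H, np.zeros(16))
assert y.shape == (16, 6, 6)
```

Note how the intermediate maps are shared across all N outputs, which is the filter-sharing scheme described in property (ii) above.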
Converting existing structures to decomposed ones is a straightforward process, as each existing N-D convolutional layer can systematically be decomposed into sets of consecutive layers consisting of 1D linearly rectified kernels and 1D transposed kernels, as shown in Figure 1. In the next section we apply decompositions to two well-known computer vision problems: image classification and stereo matching.
3.1 Complexity Analysis
We analyze the theoretical speed-up of the proposed method as follows. Consider as the baseline a convolutional layer with $N$ filters of spatial size $d_1 \times d_2$ over $C$ input planes. Without loss of generality, we can assume $d_1 = d_2 = d$. This baseline is then decomposed into two consecutive layers of $L$ and $N$ filters with filter sizes $d \times 1$ and $1 \times d$ respectively. The computational cost of these two schemes is proportional to $CNd^2$ and $d(CL + LN)$ respectively. Therefore, considerable improvements are achieved when $L(C + N) < CNd$. The analysis of this expression reveals that, although $d$ is larger in the first layer (e.g., $d = 11$ for AlexNet [14]), $C$ is usually too small (e.g., $C = 3$ for RGB images) to make a significant difference. Current architectures tend to have a large number of filters in later layers. For instance, consider a VGG model using kernels of size $3 \times 3$, consecutive layers of equal size (e.g., $C = N$), and maintaining the number of output filters through the decomposed layer ($L = N$). In that case, the theoretical cost of our method is proportional to $2dN^2$ vs. $d^2N^2$ for the baseline.
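The cost comparison above can be checked with a few lines of arithmetic. The sketch below counts multiplications per output location; the AlexNet-style and VGG-style layer sizes used are the commonly published ones and serve only as illustrations:

```python
def conv_cost(c_in, c_out, kh, kw):
    """Multiplications per output location: C * N * kh * kw."""
    return c_in * c_out * kh * kw

def speedup(C, N, d, L=None):
    """Baseline C x N x d x d layer vs. its decomposition into a d x 1 layer
    with L intermediate filters followed by a 1 x d layer (L = N by default)."""
    L = N if L is None else L
    baseline = conv_cost(C, N, d, d)                            # C * N * d^2
    decomposed = conv_cost(C, L, d, 1) + conv_cost(L, N, 1, d)  # d * (C*L + L*N)
    return baseline / decomposed

# A first AlexNet-style layer: C = 3 (RGB) is too small, so decomposing it
# alone is actually more expensive per output location.
assert speedup(C=3, N=64, d=11) < 1.0

# A VGG-style 3x3 layer with C = N = L: the gain is exactly d / 2 = 1.5x.
assert speedup(C=256, N=256, d=3) == 1.5
```

This mirrors the condition $L(C+N) < CNd$ in the text: the decomposition pays off when the input plane count is large relative to the number of intermediate filters.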
4 Experiments
We conduct two sets of experiments representing different use cases to validate our proposal. First, we run experiments on image classification. More specifically, we test four well-known network architectures, namely LeNet [15], CIFAR10-quick [16], AlexNet [6] and VGG [7], on three publicly available datasets: MNIST [17], CIFAR10 [18] and ImageNet [19]. An additional experiment is included on the challenging Places2 dataset [20]. Second, we run experiments on stereo matching to show the generic learning capabilities and applicability of our proposal. To this end, we consider a state-of-the-art stereo matching method and replace the existing, accurate, network of [21] with our decomposed architecture. This set of experiments is carried out on the KITTI benchmark [22].
4.1 Image Classification
All the experiments on image classification are conducted on a dual Xeon 8-core E5-2650 with 128GB of RAM using two Kepler Tesla K20 GPUs in parallel, unless otherwise specified. We use the Torch7 framework [23] and large-scale experiments are carried out using the multi-GPU implementation available in [24]. Learning rate, weight decay and momentum were set to the default values: we start with an initial learning rate that is decreased when the training error plateaus, with fixed weight decay and momentum. Again, unless otherwise specified, we use the same hyper-parameter setup as in the original experiments. Data augmentation is done through random crops where necessary and random horizontal flips. Please note that other training approaches may use different data augmentation techniques such as color augmentation [6]. For a fair comparison, we select the original networks as baselines, and all models including baselines are trained from scratch on the same computer using the same seed and the same framework.

A basic decomposed layer consists of vertical kernels followed by horizontal ones, and the nonlinearities in between 1D convolutions are set to rectified linear units (ReLU). We evaluate different instances of this model, referred to as DecomposeMe, where the subindex is the number of layers being decomposed (Figure 2a), and the superindex indicates variations in the composition of the layer, such as the kernel size, the nonlinearity being used or the order of the kernels. Decompositions respect the size of the filter in the original model, and the number of output filters from the convolutional layer is maintained. Layers that are not decomposed are left as in the original model. For specific experiments we show results for variations within each of these instances.
Model                        Avg. (%)   #Params(1)   k(2)
LeNet Baseline (retrain)     99.1       52.0K        5
DecomposeMe                  99.2       53.8K        5
DecomposeMe                  99.2       25.6K        5
DecomposeMe                  99.3       22.0K        5
LeNet                        99.2       53.8K        9
DecomposeMe                  99.4       22.0K        9

(1) Total number of parameters. (2) Largest kernel size in the network.
4.1.1 MNIST and CIFAR10
As a sanity check, we first run experiments on the MNIST and CIFAR10 datasets.
MNIST [17] is a database of handwritten digits, consisting of a training set and a test set. All digits in the database have been size-normalized and centered in a fixed-size image. For this experiment we consider the LeNet model proposed in [15], consisting of two convolutional layers, each followed by a max-pooling layer and a hyperbolic tangent as the nonlinearity, and two fully connected layers. We first gradually substitute decomposed layers for convolutional layers, maintaining the number of output filters (referred to as DecomposeMe, since this model keeps the hyperbolic tangent between convolutional layers). Then, we conduct an additional experiment setting the nonlinearities between the convolutional layers to rectified linear units. In this case, we also consider a larger kernel size in the first layer (referred to as DecomposeMe).

Figure 3 summarizes the results for the baseline together with four instances of our proposal. As shown, decompositions systematically outperform the baseline and, when multiple layers are decomposed, significantly reduce the number of parameters in the network. In addition, the performance improves for larger kernel sizes in the first layer. Performance curves for these and additional instances with different filter compositions, or excluding the nonlinearity in between decomposed layers, are shown in Figure 3a. As shown by DecomposeMe, adding the nonlinearity is necessary: the structure without a nonlinearity learns adequately at the beginning but its performance then drops drastically after a few iterations. Large-scale experiments presented in the next section also confirm the need for a nonlinearity in between 1D convolutions. More importantly, as a consequence of the reduced number of parameters, the gap between training and testing accuracy decreases when using decomposed layers, which indicates that the structures are less prone to overfitting. This is evident, for instance, in the later epochs, where our proposed method provides test accuracy similar to the baseline while the training accuracy of the baseline is significantly higher, see Figure 3.
CIFAR10 [18] is a database of RGB images split into 10 classes, divided into a training set and a test set. We consider the CIFAR10-quick model [16], consisting of convolutional layers with square kernels.
Model                                Top-1   #Params   k
CIFAR10-quick Baseline (retrained)   83.8    5M        5
DecomposeMe                          84.2    4.7M      5
DecomposeMe                          84.4    3.6M      5
DecomposeMe                          83.5    2.4M      5
DecomposeMe                          81.2    800K      5
Figure 4 summarizes the results for the baseline together with an instance of our structure decomposing one layer, followed by three instances decomposing all convolutional layers with different in-between 1D convolutions. As shown, decomposing a single layer provides a slight increase in performance while maintaining the number of parameters. Decomposing additional layers reduces the number of parameters considerably, and there is only a slight drop in performance even for the largest reductions. We have also experimented with different configurations of the decomposition, such as horizontal kernels followed by vertical ones and vice versa, or the combination of both, to verify that the learning process is able to deal with different types of signals. Figure 4a shows learning curves for the baseline versus our structure with three decomposed layers, varying the filter composition: a vertical convolution followed by a horizontal one and vice versa. As we can see in these plots, decomposed layers provide a smaller gap between training and testing accuracy and thus reduce overfitting while maintaining performance. For instance, in the later epochs, all the structures provide the same test performance but the training performance of the baseline is higher. These and additional empirical results (not reported) show that the performance is invariant to permutations of the order of the tensors and that there is no significant benefit in combining the two types of configurations. Similarly, we have also experimented with substituting a Network in Network [25] implementation for the basic architecture, which renders similar benefits when only the first layer is decomposed; in that case, our architecture achieves an increase in performance.
4.1.2 LargeScale Experiments: ImageNet and Places2
Datasets. We now focus on two large-scale datasets: ImageNet [26] and Places2 [20]. ImageNet is a large-scale dataset of labelled images split into categories. We used the ILSVRC2012 [19] subset consisting of 1.2 million images for training and a held-out set for validation. Places2 [20] is a large-scale dataset created specifically for training systems targeting high-level visual understanding tasks. This dataset consists of millions of training images covering unique scene categories and 20000 images for validation. The database comprises between 5000 and 30000 training images per category, which is consistent with real-world frequencies of occurrence.
Deep Models. We consider two network structures: AlexNetOWTBn [14] and Bnet [7] (VGG-B). AlexNetOWTBn is the "one weird trick" variation (OWT) of AlexNet [6] where we adopt batch normalization (Bn) after each convolutional layer [27]. Bnet [7] is the B version of the VGG structure and consists of convolutional layers with max-pooling after every two convolutions. We consider decompositions in each of those layers, reducing the number of kernels where appropriate. For Bnet models, we consider two types of weight initialization: Xavier [28] (referred to as DecomposeMe) and Kaiming [29], which we adopt as the default configuration since it yielded slightly better results. In both cases, bias terms were set to zero. Models were trained for a fixed number of epochs, with batch sizes chosen separately for AlexNetOWTBn and Bnet.



Network Analysis. We analyze several modifications of the models to better understand the contribution of the proposed approach. First, we study the effect of including nonlinearities in between convolutional layers and of different types of filter compositions, such as horizontal kernels followed by vertical ones and vice versa, or the combination of both. Second, following the trend of recent architectures [31, 32, 33], we remove the intermediate fully connected layers of the models to compare the performance of the convolutional layers. These compact models include only a single fully connected layer to produce the desired number of outputs (one neuron per class for ImageNet and Places2, respectively). Figure 2b shows a comparison between original and compact models. As a direct consequence of removing fully connected layers, the number of parameters drops drastically. For a comprehensive comparison, we also train and report results for the baseline models in their compact form. Compact models do not use Dropout [34].
Evaluation. We measure classification performance as top-1 accuracy on the validation set using the center crop, named Top-1. We also provide training-validation accuracy gap plots, named Train-val gap. These plots show the evolution of the difference between training and validation accuracy as training proceeds [35]. Overfit models tend to produce a high (positive) gap, while underfit models tend to perform similarly on both sets and, therefore, produce a low training-validation accuracy gap.
Experimental results. A summary of the results is listed in Table 1a and Table 1b for ImageNet and Places2 respectively. Training plots for selected instances of AlexNetOWTBn and BNet are shown in Figure 5 for ImageNet and Figure 6 for Places2. As shown in Table 1a, for AlexNetOWTBn, the number of parameters is only reduced when more than one layer is decomposed. This is expected since, in the AlexNetOWTBn structure, each decomposed layer introduces an additional convolutional layer (see the complexity analysis in Sect. 4.2). However, despite the slightly larger number of parameters, there is an increase in performance. Empirically, we find similar results when a single layer of OverFeat [1] is decomposed; in that case, there is a performance increase with respect to the baseline. This suggests that simplified kernels (compositions of 1D kernels) actually help during the training process and that the effective capacity of the models increases with the additional nonlinear layers.
More substantial changes occur when additional layers are decomposed. The network is then able to produce better results and at the same time reduce the number of parameters being used. The reduction in the number of parameters is even more substantial when the third layer is decomposed. In this case, the model still performs better than the baseline while using only a fraction of the baseline's parameters. These results suggest that simplifying ConvNets using the proposed decomposition method not only reduces the number of parameters required but also outperforms equivalent models learning the complete filters. As in the MNIST experiments, we see no significant difference between variations in the composition of the filters, such as horizontal kernels followed by vertical ones. Therefore, we select vertical kernels followed by horizontal ones as the default choice, which leads to computational benefits due to memory alignment.
Training curves comparing the effect of including nonlinearities in between decomposed layers are shown in Figure 5a. As shown, models including nonlinearities outperform their equivalents not using rectified kernels, independently of the number of decomposed layers. These results suggest that the additional nonlinearity in between each decomposed layer increases the effective capacity of the structure.
Interestingly, we can also see in the corresponding subfigures of Figure 5 and Figure 6 that decomposed layers consistently produce training curves with a smaller gap between training and validation accuracy. From these results, we can infer that low-rank filters help in the regularization process during training, in line with the conclusions drawn in [36], and that our proposed method is less prone to overfitting, measured as the gap between training and validation accuracy.
We now focus on results obtained using compact networks. First, in Figure 5a and Figure 6a we can see that compact instances of AlexNetOWTBn using decomposed layers outperform their equivalents using fully connected layers. For ImageNet (Figure 5a), compact versions provide competitive results compared to the (full) baseline. Compact Bnet models on ImageNet provide slightly lower performance than their equivalent full models, as shown in Figure 5b; nevertheless, the drop in performance is negligible. For these compact models we also observe in Figure 5c that the gap between training and validation accuracy is negative during most of the training process, suggesting that these models are too small for this particular dataset. The behavior of decomposed versions of the Bnet structure on Places2 is different, as shown in Figure 6b and Figure 6d. As summarized in Table 1b, all models provide similar performance on this dataset. These results suggest that 2D filters are, in fact, suboptimal layers that need additional fully connected layers to improve performance. Compared to the baselines, compact models lead to an even more significant reduction in the total number of parameters, see Table 1. From these results, we can conclude that using 1D convolutional layers not only reduces the number of operations and parameters, but also provides competitive (or better) performance compared to state-of-the-art methods.
Model                 Forward Time   Total Time
AlexNetOWTBn [37]     22.45          69.04
AlexNetOWTBn          19.98          58.11
DecomposeMe           28.90          90.17
DecomposeMe           28.38          88.59
DecomposeMe           27.79          83.66
BNet [20]             140.45         560.70
BNet                  135.13         535.05
DecomposeMe           56.19          271.52
DecomposeMe           51.87          252.50
DecomposeMe           63.89          289.20
DecomposeMe           47.02          226.53
DecomposeMe           38.54          130.13
The significant reduction in the number of parameters and memory footprint has benefits not only at test time. During training, these compact models make better use of the available resources. For instance, it is possible to increase the batch size to improve the estimation of gradients and, therefore, leverage larger amounts of data. The bottom row of Table 1b shows one additional instance of our method with a larger number of decompositions trained with a larger batch size. Please note that this was not feasible using the baselines. As we can see, the number of parameters of this model is significantly lower than the baseline (e.g., a 92% reduction) and, more importantly, there is a significant improvement in accuracy and computational complexity, as we will see in Section 4.2.

4.2 Complexity analysis
Figure 7a shows the empirical computational costs of 2D convolutional layers (baselines) and decomposed layers for different representative layers. The plot represents the total time required for a forward-backward pass as a function of the number of intermediate filters. For the baseline, we report the time required solely for the convolution, while for decomposed layers we report the combination of 1D convolution, nonlinear layer, and 1D convolution. As we can see, the first layer does not produce any benefits time-wise. However, a significant reduction in time occurs for subsequent layers, especially when using larger kernel sizes. As shown, a more substantial reduction is achieved when the number of intermediate filters is similar to the number of input filters.
Empirical costs for the baselines and the instances of decompositions used in our experiments are summarized in Figure 7b. As expected, we observe that the time spent in fully connected layers is small compared to the time required by the convolutional layers (see the comparison between the two AlexNetOWTBn variants). In addition, substantial savings occur for instances of BNet models where pairs of layers are decomposed, therefore maintaining the number of layers.
A fair comparison with existing low-rank approximation methods [2, 4] is difficult, as they require a fully pre-trained network for initialization and need a fine-tuning process to prevent significant drops in performance. In contrast, our method is trained directly from data using a standard initialization. Compared to [2], for ImageNet, considering AlexNetOWTBn as a similar network architecture (four convolutional layers and three fully connected layers), we obtain an increase in top-1 performance together with a reduction in the number of weights. This compares favorably with [2], which reports a reduction in the number of weights at the cost of an increase in top-5 error. The best results reported in [4] are a speed-up with no loss in accuracy, and a larger speed-up with a drop in classification accuracy on ICDAR2003. In our case, our best result is on Places2, where we achieve a speed-up in forward time together with a reduction in the number of parameters and an increase in top-1 classification accuracy.
4.3 Stereo Matching
The purpose of this experiment is to further demonstrate the applicability of our method when converting existing complex architectures. To this end, we address the problem of computing the disparity of each pixel in an image given a stereo pair of images. In particular, we build on the recent method of Zbontar et al. [21], who propose a ConvNet that matches patches in a stereo pair. The architecture consists of two feature-extraction modules, one per image, whose outputs serve as input to a learned matching network [21]. The entire process is learned in an end-to-end fashion and provides state-of-the-art results on KITTI2012 [38]. In this experiment, we focus on converting the feature-extraction modules into decomposed ones. These modules use four consecutive convolutional layers with kernels of size , each followed by rectified linear units. To demonstrate the versatility of our architecture, we test two different decompositions, as outlined in Figure 2c. First, we pair every two convolutional layers and transform them into a single decomposed layer. Second, we consider a single decomposition that compacts the four layers into one decomposed layer using larger kernels of size . Both decompositions therefore cover the same neighborhood of size  in the input feature map, which is equivalent to four consecutive convolutions of  in the original model. Table 2 summarizes the results for these models. We report the numbers after retraining the original network (so all randomization is equivalent). Table 2 includes the run time for the complete process, including the matching network, as well as the time required to extract features, which is the focus of this experiment. As we can see, our approach significantly reduces the time required to extract features from each image. Nevertheless, this has almost no impact on the overall time, which is consistent with the original paper [21], as the feature part is not responsible for the majority of the computational cost. More importantly, our proposed method achieves almost the same performance with a significant reduction in the number of parameters.
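The receptive-field equivalence between the stacked and decomposed variants can be verified with a short sketch. The specific kernel extents (3x3 for the original stack, 1x9/9x1 for the single decomposed layer) are illustrative assumptions, since the exact sizes are omitted above:

```python
def stacked_rf(extents):
    """Receptive field along one axis of consecutive stride-1
    convolutions: each layer with extent k adds k - 1."""
    rf = 1
    for k in extents:
        rf += k - 1
    return rf

# Four consecutive 3x3 convolutions (the original extractor)
# cover a 9x9 neighborhood:
four_stacked = stacked_rf([3, 3, 3, 3])  # 9

# A single decomposed layer, 1x9 followed by 9x1, has per-axis
# extents [1, 9] horizontally and [9, 1] vertically, so it also
# covers a 9x9 neighborhood:
one_decomposed = stacked_rf([1, 9])  # 9
```

The same arithmetic shows why pairing two 3x3 layers calls for a decomposed layer with a larger kernel: two stacked 3x3 layers span 5 pixels per axis, so the replacement pair must use extent-5 1D kernels to cover the identical neighborhood.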
Finally, Table 2b summarizes benchmarking results on the KITTI dataset [38]. Our proposal provides results similar to those of the original network while using only  of the parameters in the feature layers. These results are relevant since our proposed method, without a custom design, reaches performance similar to that of a deep model that was carefully engineered. More importantly, this is achieved using only a fraction of the number of parameters.
5 Conclusions
In this paper we proposed DecomposeMe, a novel and efficient convolutional neural network architecture based on 1D convolutions. Experiments on large-scale image classification show that our approach improves classification accuracy while significantly reducing the number of parameters and the computational cost. For instance, on Places2 and compared to the VGG-B model, our architecture obtains a relative improvement in top-1 classification accuracy of  while using fewer parameters than VGG-B and with a speed-up factor in forward time of . Additional experiments on stereo matching further demonstrate the general applicability of the proposed architecture.
Acknowledgment The authors thank John Taylor for helpful discussions and continued support in using the CSIRO high-performance computing facilities. The authors also thank NVIDIA for generous hardware donations.
References
 [1] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “Overfeat: Integrated recognition, localization and detection using convolutional networks,” in International Conference on Learning Representations (ICLR2014). CBLS, April 2014.
 [2] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure within convolutional networks for efficient evaluation,” in NIPS, 2014, pp. 1269–1277.
 [3] A. Sironi, B. Tekin, R. Rigamonti, V. Lepetit, and P. Fua, “Learning separable filters,” PAMI, vol. 37, no. 1, pp. 94 – 106, 2015.
 [4] M. Jaderberg, A. Vedaldi, and A. Zisserman, “Speeding up convolutional neural networks with low rank expansions,” in British Machine Vision Conference, 2014.
 [5] G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio, “On the number of linear regions of deep neural networks,” in Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, Eds., 2014, pp. 2924–2932.
 [6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, 2012.
 [7] K. Simonyan and A. Zisserman, “Very deep convolutional networks for largescale image recognition,” CoRR, vol. abs/1409.1556, 2014.
 [8] G. E. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” in arXiv, 2014.
 [9] M. Denil, B. Shakibi, L. Dinh, M. Ranzato, and N. de Freitas, “Predicting parameters in deep learning,” CoRR, vol. abs/1306.0543, 2013.
 [10] M. Jaderberg, A. Vedaldi, and A. Zisserman, “Speeding up convolutional neural networks with low rank expansions,” vol. abs/1405.3866, 2014.
 [11] B. Liu, M. Wang, H. Foroosh, M. Tappen, and M. Penksy, “Sparse convolutional neural networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2015.
 [12] R. Rigamonti, A. Sironi, V. Lepetit, and P. Fua, “Learning separable filters,” in Conference on Computer Vision and Pattern Recognition, 2013.
 [13] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure within convolutional networks for efficient evaluation,” in NIPS, 2014, pp. 1269–1277.
 [14] A. Krizhevsky, “One weird trick for parallelizing convolutional neural networks,” CoRR, vol. abs/1404.5997, 2014. [Online]. Available: http://arxiv.org/abs/1404.5997
 [15] Y. LeCun, “Lenet5, convolutional neural networks,” 2015. [Online]. Available: http://yann.lecun.com/exdb/lenet/
 [16] J. Snoek, H. Larochelle, and R. P. Adams, “Practical bayesian optimization of machine learning algorithms,” in Advances in Neural Information Processing Systems, 2012.
 [17] Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010.
 [18] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Technical Report, 2009.
 [19] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F.F. Li, “Imagenet large scale visual recognition challenge.” CoRR, vol. abs/1409.0575, 2014.
 [20] B. Zhou, A. Khosla, A. Lapedriza, A. Torralba, and A. Oliva, “Places2: A large-scale database for scene understanding,” 2015.
 [21] J. Zbontar and Y. LeCun, “Stereo matching by training a convolutional neural network to compare image patches,” CoRR, vol. abs/1510.05970, 2015.
 [22] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
 [23] R. Collobert, K. Kavukcuoglu, and C. Farabet, “Torch7: A matlablike environment for machine learning,” in BigLearn, NIPS Workshop, 2011.
 [24] GitHub. (2015) soumith/imagenet-multiGPU.torch. [Online]. Available: https://github.com/soumith/imagenet-multiGPU.torch
 [25] M. Lin, Q. Chen, and S. Yan, “Network in network,” CoRR, vol. abs/1312.4400, 2013.
 [26] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and F.F. Li, “Imagenet: A largescale hierarchical image database.” in CVPR, 2009, pp. 248–255.
 [27] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” CoRR, 2015.
 [28] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), vol. 9, May 2010, pp. 249–256.
 [29] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in ICCV, 2015.
 [30] A. Vedaldi and K. Lenc, “Matconvnet  convolutional neural networks for MATLAB,” CoRR, vol. abs/1412.4564, 2014.
 [31] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “GoogLeNet: Going deeper with convolutions,” in Computer Vision and Pattern Recognition, June 2015.
 [32] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” CoRR, vol. abs/1512.03385, 2015. [Online]. Available: http://arxiv.org/abs/1512.03385
 [33] in ICLR workshops.
 [34] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Improving neural networks by preventing coadaptation of feature detectors,” CoRR, vol. abs/1207.0580, 2012. [Online]. Available: http://arxiv.org/abs/1207.0580
 [35] M. Cogswell, F. Ahmed, R. Girshick, L. Zitnick, and D. Batra, “Reducing overfitting in deep networks by decorrelating representations,” in ICLR, 2016.
 [36] E. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure within convolutional networks for efficient evaluation,” in NIPS, 2014.
 [37] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.
 [38] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The kitti dataset,” International Journal of Robotics Research (IJRR), 2013.