I Introduction
In recent years, deep neural network architectures have excelled in several application domains, ranging from machine vision [1, 2, 3, 4, 5] to biomedical [6, 7] and financial data analysis [8, 9]. Among these developments, the Convolutional Neural Network (CNN) has evolved into a main workhorse for solving computer vision tasks. The architecture was originally developed in the 1990s for handwritten character recognition using only two convolutional layers
[10]. Over the years, with the development of Graphical Processing Units (GPUs) and efficient implementations of the convolution operation, the depth of CNNs has been increased to tackle more complicated problems. Nowadays, prominent architectures such as the Residual Network (ResNet) [11] or Google Inception [12], with hundreds of layers, have pushed performance towards saturation, and researchers have started to wonder whether millions of parameters are essential to achieve such performance. In order to extend the benefit of such deep nets to embedded devices with limited computational power and memory, recent works have focused on reducing the memory footprint and computation of a pretrained network, i.e. they apply network compression in the post-training stage. In fact, recent works have shown that traditional network architectures such as AlexNet, VGG or Inception are highly redundant structures [13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. For example, in [13] a simple heuristic based on the magnitude of the weights was employed to eliminate connections in a pretrained network, which achieved a considerable amount of compression without hurting the performance much. Additionally, representing network parameters with low bit-width numbers, as in
[23, 24, 25], has shown that the performance of a 32-bit network can be closely retained with only 4-bit representations. It should be noted that the two approaches are complementary to each other. In fact, a compression pipeline called “Deep Compression” [13], which consists of three compression procedures, i.e. weight pruning, weight quantization and Huffman-based weight encoding, achieved excellent compression performance on the AlexNet and VGG-16 architectures. Along with pruning and quantization, low-rank approximation of both convolutional layers and fully-connected layers has also been employed to achieve computational speed-up [26, 27, 28]
. Viewed as high-order tensors, convolutional layers were decomposed using traditional tensor decomposition methods, such as CP decomposition [21, 20, 29] or Tucker decomposition [30], and the convolution operation is approximated by applying consecutive 1D convolutions. Overall, efforts to remove redundancy in already trained neural networks have shown promising results by determining networks with a much simpler structure. These results naturally pose the following question: why should we compress an already trained network, rather than seek a compact network representation that can be trained from scratch? Subsequently, one could of course exploit the above-mentioned compression techniques to further decrease the cost. Under this perspective, the works in [19, 22] utilizing a low-rank approximation approach were among the first to report simplified network structures.
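The magnitude-based pruning heuristic of [13] mentioned above can be sketched in a few lines of NumPy. This is our own illustration; the keep ratio and kernel shapes are assumptions, not values from [13]:

```python
import numpy as np

def magnitude_prune(weights, keep_ratio):
    """Keep only the largest-magnitude fraction of weights, zeroing the rest."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]      # k-th largest magnitude
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3))              # a hypothetical conv kernel bank
pruned, mask = magnitude_prune(w, keep_ratio=0.1)
sparsity = 1.0 - mask.mean()                    # fraction of weights removed
```

In a full pipeline such as Deep Compression, this pruning step would be followed by fine-tuning, quantization and entropy coding.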
The success of Convolutional Neural Networks can be attributed to four important design principles: sparse connectivity, parameter sharing, pooling and multilayer structure. Sparse connectivity (in convolutional layers) only allows local interaction between input neurons and output neurons. This design principle comes from the fact that in many natural data modalities, such as images and videos, local/neighboring values are often highly correlated. These groups of local values usually contain certain distinctive patterns, e.g. edges and color blobs in images. The parameter sharing mechanism in CNNs enables the underlying model to learn location-invariant cues. In other words, by sliding the filters over the input, the patterns can be detected regardless of their location. The pooling and multilayer structure of deep neural networks in general, and CNNs in particular, captures the compositional hierarchies embedded within many natural signals. For example, in facial images, lower-level cues such as edges, color and texture patterns form discriminative higher-level cues of facial parts, like the nose, eyes or lips. A similar compositional structure can be seen in speech or text, which are composed of phonemes, syllables, words and sentences. Although the particular structure of a deep network has evolved over time, the above design principles remain unchanged. At the core of any convolutional layer, each filter operates as a micro feature extractor that performs a linear projection of each data patch/volume to a real value. In order to enhance the discriminative power of this micro feature extractor, the authors of [31] proposed to replace the generalized linear model (GLM) by a general nonlinear function approximator, particularly the multilayer perceptron (MLP). The resulting architecture was dubbed Network in Network (NiN), since it consists of micro networks that perform the feature extraction functionality instead of a simple linear projection.
In this paper, instead of seeking a more complex feature extractor, we propose to replace the linear projection of the traditional CNN by a multilinear projection, in the pursuit of simplicity. There has been a great effort to extend traditional linear methods to multilinear ones in an attempt to learn directly from the natural representation of the data as high-order tensors [32, 33, 34, 35, 36, 37]. The beauty of multilinear techniques lies in the property that the input tensor is projected simultaneously in each tensor mode, allowing only certain connections between the input dimensions and output dimensions, hence greatly reducing the number of parameters. Previous works on multilinear discriminant learning and multilinear regression [38, 32, 33, 34, 35, 36] have shown competitive results for multilinear-based techniques. The proposed architecture still inherits the four fundamental design properties of a traditional deep network, while utilizing the multilinear projection as a generic feature extractor. The complexity of each feature extractor can be easily controlled through a “rank” hyperparameter. Besides a fast computation scheme when the network is compact, we also propose an alternative computation method that allows efficient computation when the complexity increases.
The contribution of our paper can be summarized as follows:

We propose a generic feature extractor that performs a multilinear mapping to replace the conventional linear filters in CNNs. The complexity of each individual feature extractor can be easily controlled via its rank, which is a hyperparameter of the method. By having the ability to adjust each individual filter's complexity, the complexity of the entire network can be adjusted without increasing the number of filters in a layer, i.e. the width of the layer. Since the proposed mapping is differentiable, the entire network can be easily trained end-to-end by any gradient-descent-based training process.

We provide an analysis of the computation and memory requirements of the proposed structure. In addition, based on the properties of the proposed mapping, we propose two efficient computation strategies suited to two different complexity settings.

The theoretical analysis of the proposed approach is supported by experimental results on real-world classification problems, in comparison with CNNs and the low-rank scheme in [19].
The remainder of the paper is organized as follows: In Section 2, we provide an overview of related works focusing on designing compact network structures. Section 3 gives the necessary notations and definitions before presenting the proposed structure and its analysis. In Section 4, we provide details of our experimental procedures, results and quantitative analysis. Section 5 concludes our work and discusses possible future extensions.
II Related Work
Research focusing on the design of less redundant network architectures has gained much more attention recently. One of the prominent design patterns is the bottleneck unit, which was first introduced in the ResNet architecture [11]. The bottleneck pattern is formed by two convolutional layers with some convolutional layers in between: the first convolutional layer is used to reduce the number of input feature maps, while the latter is used to restore the number of output feature maps. Several works such as [39, 40, 41] have incorporated bottleneck units into their network structures to reduce the computation and memory consumed. Recently, the MobileNet architecture [42] was proposed, which replaces the normal convolution operation by depthwise separable convolution layers. Constituted by a depthwise convolution and a pointwise convolution, the depthwise separable convolution layer performs the filtering and combining steps independently. The resulting structure is many times more efficient in terms of memory and computation. It should be noted that the bottleneck design and the depthwise separable convolution layer are designs at the macro level of the network structure, in which the arrangement of layers is investigated to reduce computation.
On a micro level, the works in [19] and [22] assumed a low-rank structure of the convolutional kernels in order to derive a compact network structure. In fact, the low-rank assumption has been incorporated into several designs prior to deep neural networks, such as dictionary learning and wavelet transforms of high-dimensional data. The first incorporation of the low-rank assumption in neural network compression was proposed in [21, 29, 20]. In [29], CP decomposition was proposed to decompose the entire 4D convolutional layer into four 1D convolutions. Although the effective depth of the network remains the same, replacing one convolution operation by four can potentially lead to difficulty in training the network from scratch. With a carefully designed initialization scheme, the work of [22] was able to train a mixture of low-rank filters from scratch with competitive performance. Improving on the idea of [29], a different low-rank structure that allows both approximating an already trained network and training from scratch was proposed in [19]. Specifically, let us denote by $\mathcal{W} \in \mathbb{R}^{d \times d \times C \times N}$ a convolutional layer of $N$ kernels, where $C$ and $d$ are the number of input feature maps and the spatial size of the kernel, respectively. [19] proposed to approximate $\mathcal{W}$ using vertical kernels $\mathbf{v}_k^c \in \mathbb{R}^{d}$ and horizontal kernels $\mathbf{h}_n^k \in \mathbb{R}^{d}$. The approximation is of the following form:

$$\tilde{\mathcal{W}}_n^c = \sum_{k=1}^{K} \mathbf{v}_k^c \left(\mathbf{h}_n^k\right)^T, \qquad (1)$$

where the superscript $c$ and the subscript $n$ denote the index of the channel and the kernel, respectively, and $K$ is a hyperparameter controlling the rank of the matrix approximation. Here $\tilde{\mathcal{W}}_n^c$ is just the 2D kernel weight of the $n$th filter applied to the $c$th channel of the input feature map; $\mathbf{v}_k^c$ and $\mathbf{h}_n^k$ are just $d$-dimensional vectors.
As can be seen from (1), the authors simplify a convolutional layer by two types of parameter sharing. The first is the sharing of the right singular vectors ($\mathbf{h}_n^k$) across all input channels within the $n$th filter, while the second enforces the sharing of the left singular vectors ($\mathbf{v}_k^c$) across all filters. The work in [19] is closely related to ours, since we similarly avoid designing a particular initialization scheme by including a Batch Normalization step [43]; the resulting structure was easily trained from scratch with different network configurations.

III Proposed Method
We start this section by introducing some notations and definitions related to our work. We denote scalar values by either lowercase or uppercase characters $(x, X)$, vectors by lowercase boldface characters $(\mathbf{x})$, matrices by uppercase boldface characters $(\mathbf{X})$ and tensors by calligraphic capital characters $(\mathcal{X})$. A tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$ is a multilinear matrix with $N$ modes, where $I_n$ denotes the dimension in mode $n$. The entry at the $i_n$th index in mode $n$, for $n = 1, \dots, N$, is denoted by $\mathcal{X}(i_1, \dots, i_N)$.
III-A Multilinear Algebra Concepts
Definition 1 (Mode-$n$ Fiber and Mode-$n$ Unfolding)

The mode-$n$ fiber of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$ is the $I_n$-dimensional vector obtained by fixing every index but $i_n$. The mode-$n$ unfolding of $\mathcal{X}$, also known as mode-$n$ matricization, transforms the tensor into the matrix $\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times I_{(-n)}}$, which is formed by arranging the mode-$n$ fibers as columns, with $I_{(-n)} = \prod_{k \neq n} I_k$.
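As an illustration (our own, not code from the paper), the mode-$n$ unfolding can be written in NumPy; note that the exact column ordering of an unfolding is a convention and may differ from the one used in [47]:

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest,
    so the mode-n fibers of X become the columns of the result."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

X = np.arange(24).reshape(2, 3, 4)   # a toy 2 x 3 x 4 tensor
U0 = unfold(X, 0)                    # shape (2, 12)
U1 = unfold(X, 1)                    # shape (3, 8)
U2 = unfold(X, 2)                    # shape (4, 6)
```

Each unfolding keeps the dimension of its own mode and collapses the product of the remaining dimensions, matching the definition above.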
Definition 2 (Mode-$n$ Product)

The mode-$n$ product between a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$ and a matrix $\mathbf{W} \in \mathbb{R}^{J \times I_n}$ is another tensor of size $I_1 \times \dots \times I_{n-1} \times J \times I_{n+1} \times \dots \times I_N$, denoted by $\mathcal{X} \times_n \mathbf{W}$. The element $(i_1, \dots, i_{n-1}, j, i_{n+1}, \dots, i_N)$ of $\mathcal{X} \times_n \mathbf{W}$ is defined as $\sum_{i_n=1}^{I_n} \mathcal{X}(i_1, \dots, i_N)\, \mathbf{W}(j, i_n)$.
For convenience, we denote the sequence of mode products $\mathcal{X} \times_1 \mathbf{W}_1 \times_2 \mathbf{W}_2 \cdots \times_N \mathbf{W}_N$ by $\mathcal{X} \prod_{n=1}^{N} \times_n \mathbf{W}_n$.
One of the nice properties of the mode product is that the result of the projection does not depend on the order of projection, i.e.

$$\mathcal{X} \times_m \mathbf{W}_m \times_n \mathbf{W}_n = \mathcal{X} \times_n \mathbf{W}_n \times_m \mathbf{W}_m, \quad m \neq n. \qquad (2)$$
The above property allows efficient computation of the projection by selecting the order of computation.
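A small NumPy sketch (with hypothetical shapes) of the mode-$n$ product, which also lets us check the order-invariance property of Eq. (2):

```python
import numpy as np

def mode_product(X, W, mode):
    """Mode-n product of tensor X with matrix W of shape (J, I_n)."""
    Y = np.tensordot(X, W, axes=([mode], [1]))  # contracted mode moves to the end
    return np.moveaxis(Y, -1, mode)             # move it back into place

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 4, 5))
A = rng.normal(size=(2, 3))     # acts on mode 0
B = rng.normal(size=(6, 4))     # acts on mode 1

Y1 = mode_product(mode_product(X, A, 0), B, 1)  # project mode 0 first
Y2 = mode_product(mode_product(X, B, 1), A, 0)  # project mode 1 first
```

Both orders produce the same $2 \times 6 \times 5$ tensor, which is what makes it legitimate to reorder the projections for efficiency.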
III-B Multilinear filter as generic feature extractor
Let $\mathcal{X} \in \mathbb{R}^{d \times d \times C}$ and $\mathcal{W} \in \mathbb{R}^{d \times d \times C}$ denote the input patch centered at spatial location $(u, v)$ and the convolution kernel, respectively. At the core of a classic CNN, each convolution kernel operates as a feature extractor sliding through the input tensor to generate a higher-level representation. Specifically, the kernel performs the following linear mapping:

$$y_{u,v} = \langle \mathcal{X}, \mathcal{W} \rangle + b, \qquad (3)$$

where $y_{u,v}$ and $b$ denote the response at $(u, v)$ and the intercept term, respectively, and $\langle \cdot, \cdot \rangle$ denotes the dot product between two tensors. After the above linear projection, a nonlinearity is applied to $y_{u,v}$ using the layer's activation function.
We propose to replace the above linear projection by the following multilinear mapping:

$$y_{u,v} = \sum_{r=1}^{R} \mathcal{X} \times_1 \mathbf{w}_1^{(r)} \times_2 \mathbf{w}_2^{(r)} \times_3 \mathbf{w}_3^{(r)} + b, \qquad (4)$$

where $R$ is the rank hyperparameter of the projection and $\times_n$ denotes the projection along mode $n$ with the corresponding weight vector $\mathbf{w}_n^{(r)}$. In our case, $\mathbf{w}_1^{(r)}, \mathbf{w}_2^{(r)} \in \mathbb{R}^{d}$ and $\mathbf{w}_3^{(r)} \in \mathbb{R}^{C}$.
Since the mapping in Eq. (4) operates on a similar input patch and yields a scalar response, just as the linear mapping in CNNs does, the proposed multilinear mapping acts as a generic feature extractor and can be incorporated into any CNN topology, such as AlexNet [44], VGG [45], Inception [41] or ResNet [11]. In addition, since the mapping in Eq. (4) is differentiable with respect to each individual weight vector, the resulting network architecture can be trained in an end-to-end fashion by the back-propagation algorithm. We hereby denote a layer employing our proposed multilinear mapping as MLconv.
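To make the multilinear mapping concrete, the response of one MLconv filter on one patch can be sketched as follows (our own illustration; the shapes and the per-rank weight layout are assumptions consistent with the definitions above):

```python
import numpy as np

def multilinear_response(patch, W1, W2, W3, bias=0.0):
    """Rank-R multilinear filter response on one patch, as in Eq. (4).

    patch: (d, d, C) input volume; W1, W2: (R, d); W3: (R, C).
    Each rank term contracts the patch with one vector per mode."""
    y = bias
    for w1, w2, w3 in zip(W1, W2, W3):
        y += np.einsum('ijc,i,j,c->', patch, w1, w2, w3)
    return float(y)

rng = np.random.default_rng(2)
d, C, R = 3, 8, 2
patch = rng.normal(size=(d, d, C))
W1, W2 = rng.normal(size=(R, d)), rng.normal(size=(R, d))
W3 = rng.normal(size=(R, C))
y = multilinear_response(patch, W1, W2, W3)   # a single scalar response
```

Since each rank term is an outer-product kernel dotted with the patch, the same response can also be obtained by first assembling a full kernel, which is the basis of the alternative computation strategy discussed later.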
Recently, mode multiplication has been introduced as a tensor contraction layer in [46] to project the entire input of a layer, viewed as a high-order tensor, to another tensor. This is fundamentally different from our approach, since the tensor contraction layer is a global mapping which does not incorporate the sparse connectivity and parameter sharing principles. In general, mode multiplication can be applied to an input patch/volume to output another tensor instead of a scalar, as in our proposal. We restrict the multilinear projection to the form of Eq. (4) to avoid an increase in the output dimension, which would lead to computational overhead in the next layer. Moreover, the tensor unfolding operation required to perform a tensor-to-tensor multilinear projection would potentially increase the computation. In contrast, our proposed mapping is a special case of the general multilinear mapping using mode products, in which the output tensor degenerates to a scalar. This special case allows efficient computation of the projection, as shown in the next section.
III-C Memory and Computation Complexity
One of the most obvious advantages of the mapping in Eq. (4) is that it requires far fewer parameters, compared to the linear mapping in a CNN. In a CNN utilizing the mapping in Eq. (3), a layer with $N$ kernels of size $d \times d \times C$ requires the storage of $N d^2 C$ parameters. On the other hand, a similar layer configuration utilizing the projection in Eq. (4) requires only $N R (2d + C)$ parameters. The gain ratio is:

$$\frac{N d^2 C}{N R (2d + C)} = \frac{d^2 C}{R (2d + C)}. \qquad (5)$$

As compared to a similar CNN topology, the memory reduction obtained by utilizing the mapping in Eq. (4) varies for different layers. The case where $C \gg d$ (which is the usual case) leads to a gain ratio approximately equal to $d^2 / R$. In our experiments, we have seen that suitable choices of the rank in all layers yield a substantial memory reduction, while having competitive performance compared to a CNN with a similar network topology.
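The parameter counts behind the gain ratio can be checked numerically; the layer sizes below are hypothetical, chosen only to illustrate the $C \gg d$ regime:

```python
def params_cnn(d, C, N):
    """Weights of a standard conv layer: N kernels of size d x d x C."""
    return N * d * d * C

def params_mlconv(d, C, N, R):
    """Weights of an MLconv layer: per filter, R vectors of sizes d, d and C."""
    return N * R * (2 * d + C)

d, C, N = 3, 96, 96                  # a hypothetical 3x3 layer with 96 maps
gains = {R: params_cnn(d, C, N) / params_mlconv(d, C, N, R)
         for R in (1, 2, 4, 6)}
```

As expected from Eq. (5), the gain is inversely proportional to the rank: doubling the rank halves the memory saving.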
Let us denote by $\mathcal{X}_l \in \mathbb{R}^{H \times W \times C}$ and $\mathcal{W}_l$ the input and the kernels of the $l$th convolutional layer, having $C$ input feature maps and $N$ output feature maps. In addition, we assume zero-padding and a sliding window of unit stride. By using the linear projection, as in the case of a CNN, the computational complexity of this layer is $O(H W d^2 C N)$. Before evaluating the computational cost of a layer using the proposed method, it should be noted that the projection in Eq. (4) can be efficiently computed by applying three consecutive convolution operations, whose details depend on the order of the three modes. Therefore, although the result of the mapping in Eq. (4) is independent of the order of the mode projections, the computational cost actually depends on this order. Since $C$ is typically much larger than $d$, it is computationally more efficient to first perform the projection in mode 3, in order to reduce the number of input feature maps for the subsequent mode-1 and mode-2 projections:

$$y_{u,v} = \sum_{r=1}^{R} \left( \mathcal{X} \times_3 \mathbf{w}_3^{(r)} \right) \times_1 \mathbf{w}_1^{(r)} \times_2 \mathbf{w}_2^{(r)} + b. \qquad (6)$$
The response in Eq. (6) is the summation of $R$ independent projections, with each projection corresponding to the following three consecutive steps, as illustrated in Figure 1:

Projection of the patch along the third mode, i.e. a linear combination of its $C$ input channels. The result is a tensor of size $d \times d$.

Projection of the result along the first mode, i.e. a linear combination of its rows. The result is a tensor of size $1 \times d$.

Projection of the result along the second mode, i.e. a linear combination of its $d$ remaining elements, which yields a scalar.
With the aforementioned configuration of the $l$th layer, the computational complexity of an MLconv layer utilizing our multilinear mapping is as follows:

The mode-3 projection corresponds to applying $R$ convolutions to the input with kernels of size $1 \times 1 \times C$, having a computational complexity of $O(H W C R)$ per filter. The output of the projection along the third mode is a tensor of size $H \times W \times R$.

The mode-1 projection is equivalent to applying a convolution with one separable $d \times 1$ kernel to each of the $R$ feature maps, having a complexity of $O(H W d R)$. This results in a tensor of size $H \times W \times R$.

The mode-2 projection, similar to the mode-1 projection, can be computed by applying a convolution with one separable $1 \times d$ kernel, requiring $O(H W d R)$ computation. This results in a tensor of size $H \times W \times R$. By summing over the ranks, we arrive at the output of one filter; with $N$ filters, the output of the layer has size $H \times W \times N$.
The total complexity of layer $l$ using our proposed mapping is thus $O(H W N R (C + 2d))$. Compared to the linear mapping, our method achieves a computational gain of:

$$\frac{H W N d^2 C}{H W N R (C + 2d)} = \frac{d^2 C}{R (C + 2d)}. \qquad (7)$$
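Under the stated assumptions (unit stride and multiply-accumulate counts only), this gain can be verified with a short script; the layer dimensions are again hypothetical:

```python
def flops_conv(H, W, d, C, N):
    """MAC count of a standard conv layer producing an H x W x N output."""
    return H * W * N * d * d * C

def flops_mlconv_separable(H, W, d, C, N, R):
    """Scheme 1: per filter and rank, a 1x1xC channel projection followed
    by a d x 1 and a 1 x d separable convolution."""
    return H * W * N * R * (C + d + d)

H, W, d, C, N, R = 32, 32, 3, 96, 96, 1
speedup = flops_conv(H, W, d, C, N) / flops_mlconv_separable(H, W, d, C, N, R)
```

The spatial extent and filter count cancel, leaving exactly the per-filter ratio of Eq. (7).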
III-D Initialization with a Pretrained CNN
The proposed mapping in Eq. (4) can be viewed as a constrained form of the convolution kernel, as follows:

$$y_{u,v} = \left\langle \mathcal{X},\; \sum_{r=1}^{R} \mathbf{w}_1^{(r)} \circ \mathbf{w}_2^{(r)} \circ \mathbf{w}_3^{(r)} \right\rangle + b, \qquad (8)$$

where the kernel is expressed in Kruskal form as the sum of outer products ($\circ$) of the corresponding projection vectors in the three modes. By calculating the response using the mode product definition, as in Eq. (4), and using the dot product, as in Eq. (8), the equivalence of Eq. (4) and Eq. (8) can be verified [47].
Consequently, a convolutional layer can be converted to an MLconv layer by decomposing each convolution filter into Kruskal form using any CP decomposition method [47]. It should be noted here that, since there is no closed-form solution of the CP decomposition, such a conversion corresponds to an approximation step. Under this perspective, a pretrained CNN can be used to initialize our network structure in order to speed up the training process. However, as we will show in the experimental section, random initialization of the multilinear filters can lead to better performance.
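For illustration, a minimal alternating-least-squares CP decomposition of a 3-way kernel can be sketched as below. This is a toy stand-in, not the canonical ALS solver of [47] (no normalization or convergence check), intended only to show how a filter could be factored into Kruskal form:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (I x R) and V (J x R)."""
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

def cp_als(T, rank, n_iter=300, seed=0):
    """Toy CP/PARAFAC via ALS: T (I x J x K) ~ sum_r a_r o b_r o c_r."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.normal(size=(I, rank))
    B = rng.normal(size=(J, rank))
    C = rng.normal(size=(K, rank))
    T0 = T.reshape(I, -1)                       # mode-1 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, -1)    # mode-2 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, -1)    # mode-3 unfolding
    for _ in range(n_iter):
        A = T0 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T1 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T2 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

def cp_reconstruct(A, B, C):
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Factor a synthetic rank-2 "kernel" and measure the fit.
rng = np.random.default_rng(5)
G = cp_reconstruct(rng.normal(size=(3, 2)), rng.normal(size=(3, 2)),
                   rng.normal(size=(8, 2)))
A, B, C = cp_als(G, rank=2)
rel_err = np.linalg.norm(cp_reconstruct(A, B, C) - G) / np.linalg.norm(G)
```

In practice one would use a dedicated solver (e.g. the ALS routine of a tensor library) and decompose each $d \times d \times C$ filter of the pretrained layer to obtain the per-mode weight vectors.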
In addition to an initialization scheme, Eq. (8) also complements our proposed mapping with an efficient computation strategy when the rank is large. The computational cost discussed in the previous subsection depends linearly on the rank $R$. When $R$ is large, it is more efficient to compute the mapping according to Eq. (8) by first reconstructing the full kernel and then convolving the input with it. The computational complexity of the reconstruction step is $O(N R d^2 C)$, while that of the convolution step is $O(H W N d^2 C)$, resulting in an overall complexity of $O(N R d^2 C + H W N d^2 C)$ for the entire layer. The ratio between a normal convolutional layer and an MLconv layer using this computation strategy is:

$$\frac{H W N d^2 C}{N R d^2 C + H W N d^2 C} = \frac{H W}{R + H W}. \qquad (9)$$
Since $H W$ is usually much larger than $R$, the increase in computation compared to normal convolution is marginal. Following this calculation strategy, a network with a large rank is only marginally slower than a rank-1 network or a CNN. This will be demonstrated in our experiment section. In conclusion, the computation method discussed in this subsection allows the scalability of our proposed mapping when $R$ is large, while the previous subsection proposes an efficient computation scheme that allows computation savings when $R$ is small. Overall, we can conclude that the computation of the proposed layer structure is efficient while, as will be shown in the experimental evaluation, increasing the rank of the adopted mapping can increase performance.
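This second computation scheme can be illustrated by assembling the full kernel of Eq. (8) from its factors and running a plain convolution with it (a naive sketch with assumed shapes, not an optimized implementation):

```python
import numpy as np

def kruskal_to_kernel(W1, W2, W3):
    """Full d x d x C kernel as the sum of R outer products, as in Eq. (8)."""
    return np.einsum('ri,rj,rc->ijc', W1, W2, W3)

def conv2d_valid(X, K):
    """Naive 'valid' 2-D correlation of X (H, W, C) with kernel K (d, d, C)."""
    H, W, _ = X.shape
    d = K.shape[0]
    out = np.empty((H - d + 1, W - d + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(X[i:i + d, j:j + d, :] * K)
    return out

rng = np.random.default_rng(3)
d, C, R = 3, 4, 6
W1, W2 = rng.normal(size=(R, d)), rng.normal(size=(R, d))
W3 = rng.normal(size=(R, C))
X = rng.normal(size=(8, 8, C))
Y = conv2d_valid(X, kruskal_to_kernel(W1, W2, W3))   # one filter's output map
```

The kernel reconstruction is done once per forward pass, after which the layer behaves exactly like a standard convolution, which is why this scheme maps well onto optimized convolution routines.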
IV Experiments
In this section, we provide experimental results to support the theoretical analysis in Section III. The experimental protocol and datasets are described first, followed by a discussion of the experimental results.
IV-A Network Topology
A traditional CNN topology consists of two modules: a feature extractor module and a classifier module. Several convolution and pooling layers stacked on top of each other act as the feature extractor, while one or two fully-connected layers act as the classifier. In order to evaluate the effectiveness of the proposed multilinear filter, we constructed a network architecture with only feature extractor layers, i.e. convolution or MLconv layers together with pooling layers, while skipping fully-connected layers. As the name suggests, a fully-connected layer has dense connections, accounting for a large number of parameters in the network while being prone to overfitting. Moreover, a powerful and effective feature extractor module is expected to produce a highly discriminative latent space in which the classification task is made simple. Such fully-convolutional networks have attracted much attention lately due to their compactness and excellent performance in image-related problems like semantic segmentation, object localization and classification [31, 2, 48]. The configuration of the baseline network adopted in our experimental benchmark is shown in Table I, where BN denotes Batch Normalization [43] and LReLU denotes the Leaky Rectified Linear Unit [49]. Our baseline architecture is similar to the one proposed in [50], with some key differences. Firstly, we choose to retain a proper pooling layer instead of performing strided convolution as proposed in [50]. Secondly, Batch Normalization was applied after every convolutional layer except the last one, whose output goes through a softmax to produce the class probabilities. In addition, the LReLU activation was applied to the output of batch normalization. It has been shown that the adoption of BN and LReLU speeds up the learning process of the network by being more tolerant to the learning rate, with the possibility of arriving at better minima
[43, 51].

Input layer
conv - BN - LReLU
conv - BN - LReLU
conv - BN - LReLU
MaxPooling
conv - BN - LReLU
conv - BN - LReLU
conv - BN - LReLU
MaxPooling
conv - BN - LReLU
conv - BN - LReLU
conv - LReLU
Global average over spatial dimensions
softmax activation
Based on this network topology, we compare the performance of the standard linear convolution kernel (CNN), our proposed multilinear kernel (MLconv) and the low-rank (LR) structure proposed in [19]. The last two convolutional layers were not replaced by LR or MLconv layers. It should be noted that BN and LReLU are applied to all three competing structures in our experiments, whereas in [19] BN was not applied to the baseline CNN, which could potentially lead to biased results.
IV-B Datasets
IV-B1 CIFAR-10 and CIFAR-100
The CIFAR dataset [52] is an object classification dataset consisting of color images for training and testing. CIFAR-10 refers to the 10-class classification problem of the dataset, while CIFAR-100 refers to a more fine-grained classification of the same images into 100 classes.
IV-B2 SVHN
SVHN [53] is a well-known digit recognition dataset consisting of images of house numbers extracted from natural scenes, with a varying number of samples from each class. This dataset poses a much harder character recognition problem compared to the MNIST dataset [10]. We used the cropped images provided by the database, in which each individual image might contain some distracting digits on the sides.
[Tables II and III: classification error (%) on CIFAR-10 and CIFAR-100, respectively, of CNN, MLconv1, LR26, MLconv2, LR53, MLconv4, LR106 and MLconv6, trained from scratch and initialized from a pretrained CNN, together with the number of parameters of each network.]
IV-C Experimental Settings
All networks were trained using both the SGD optimizer [54] and Adam [55]. While the proposed structure tends to arrive at better minima with Adam, this is not the case for the other two methods. For the SGD optimizer, the momentum was kept fixed. We adopted two learning rate schedules; each schedule starts from an initial learning rate and decreases to the next value after a number of epochs that was cross-validated. Each network was trained for a fixed maximum number of epochs on CIFAR and SVHN, and the batch size was the same for all competing networks.
Regarding data augmentation, for the CIFAR datasets, randomly horizontally flipped samples were added and random translations of the images by a small number of pixels were performed during the training process; for the SVHN dataset, only random translations were performed. For both datasets, no further preprocessing was applied.
Regarding regularization, weight decay and max-norm [56] were exploited in our experiments, both individually and together. The max-norm regularizer was introduced in [56], where it was used together with Dropout. During the training process, the norm of each individual filter is constrained to lie inside a ball whose radius was cross-validated. The weight decay hyperparameter was likewise selected by cross-validation. In addition, Dropout was applied to the input and to the output of all pooling layers, with the optimal rates obtained by cross-validation. Due to the differences between the three competing structures, we observed that while the baseline CNN and LR networks work well with weight decay, applying weight decay to the proposed network structure tends to drive all the weight values close to zero when the decay coefficient is large, while the regularization effect is marginal when using a small coefficient, leading to an exhaustive search for a suitable hyperparameter. On the other hand, max-norm regularization works well with our method, without the performance being too sensitive to its value.
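The max-norm constraint can be implemented as a projection applied after each weight update; below is a small sketch (the radius and filter shapes are illustrative, not the cross-validated values of the experiments):

```python
import numpy as np

def apply_max_norm(W, radius):
    """Project each filter (first axis) back onto the L2 ball of given radius."""
    flat = W.reshape(W.shape[0], -1)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    scale = np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return (flat * scale).reshape(W.shape)

rng = np.random.default_rng(4)
W = rng.normal(size=(16, 3, 3, 8)) * 3.0   # some filters exceed the ball
W = apply_max_norm(W, radius=2.0)
```

Unlike weight decay, this constraint only rescales filters that leave the ball, which matches the observation above that it does not drive all weights towards zero.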
For the MLconv and LR structures, we experimented with several values of the rank parameter, namely $R$ for the proposed mapping and $K$ in Eq. (1) from [19]. In all of our experiments, we made no attempt to optimize $R$ and $K$ for each individual filter and layer in order to obtain the maximally compact structure, since such an approach is impractical in real cases. We instead used the same rank value throughout all layers. The experiments are, hence, different from [19], where the authors reported performance for different values of $K$ at each layer without discussing the rank selection method. The experiments were conducted with $R \in \{1, 2, 4, 6\}$ and the corresponding structures are denoted as MLconv1, MLconv2, MLconv4 and MLconv6. The values of $K$ were selected so that the number of parameters in an LR network is similar to the number of parameters of its MLconv counterpart for a given $R$. The corresponding LR structures are denoted as LR26, LR53 and LR106, where the number denotes the value of $K$. We did not perform experiments with the value of $K$ corresponding to $R = 6$, since training that network is computationally much slower and falls outside the objective of this paper.
All three competing structures, when trained from scratch, were initialized with the random initialization scheme proposed in [57]. We additionally trained the MLconv and LR structures with weights initialized from an optimal pretrained CNN on the CIFAR datasets; the aforementioned protocols were also applied for this configuration. The weights of MLconv were initialized by CP decomposition using the canonical alternating least squares method [47], while for the LR structure we followed the calculation proposed in [19].
IV-D Experimental Results
After obtaining the optimal hyperparameter values, each network was trained five times and the median value is reported. The second row of Tables II and III shows the classification errors of all competing methods trained from scratch on CIFAR-10 and CIFAR-100, respectively, while the third row shows the performance when initialized with a pretrained CNN. The last row reports the model size of each network. As can be seen from both Tables II and III, using the proposed multilinear filters leads to a reduction in memory while outperforming the standard convolution filters in both the coarse- and fine-grained classification problems of the CIFAR datasets. More interestingly, on CIFAR-100, a low-rank multilinear filter network attains a clear improvement over the baseline. As we increase the number of projections in each mode, i.e. when using a larger rank, the performance of the network increases by a small margin. In both CIFAR-10 and CIFAR-100, constraining the rank yields a memory reduction while keeping the performance relatively close to the baseline CNN, with only a small increase in classification error. Further limiting the rank to its smallest value maximizes the parameter reduction at the cost of a moderate increase in error rate on CIFAR-10 and CIFAR-100. A graphical illustration of the compromise between the number of network parameters and the classification error on CIFAR-10 and CIFAR-100 is given in Figures 2 and 3, respectively.
The classification error of each competing network trained from scratch on the SVHN dataset is shown in Table IV. Using our proposed MLconv layers, we achieved a reduction in model size while slightly outperforming the CNN. At the most compact configuration of the MLconv structure, i.e. MLconv1, we only observed a small increase in classification error compared to the CNN baseline. As we increased the complexity of the MLconv layers, little improvement was seen with MLconv4, while MLconv6 became slightly overfitted.
Comparing the proposed multilinear filter with the low-rank structure LR, all configurations of the MLconv network significantly outperform their LR counterparts. Specifically, in the most compact configuration, MLconv1 is better than LR26 on both CIFAR-10 and CIFAR-100. The margin shrinks as the complexity increases, but the proposed structure consistently outperforms LR when training the network from scratch. Similar results can be observed on the SVHN dataset: MLconv layers obtained lower classification errors than LR layers at all complexity configurations. In contrast to the experimental results reported in [19], we observed inferior results for the LR structure compared to the standard CNN when training from scratch. The difference might be attributed to two main reasons: we incorporated batch normalization into the baseline CNN, which could potentially improve its performance; and our baseline configuration has no fully-connected layer, in order to solely benchmark the efficiency of the different filter structures as feature extractors.
[Table IV: classification error (%) and number of parameters of CNN, MLconv1, LR26, MLconv2, LR53, MLconv4, LR106 and MLconv6 on SVHN.]
[Table V: theoretical and measured speedups, normalized with respect to the convolution baseline, of the LR structures and of the MLconv structures under both computation schemes; results of the second scheme are denoted with an asterisk.]
One interesting phenomenon was observed when we initialized MLconv and LR with a pretrained CNN. For the LR structure, most configurations enjoy a substantial improvement when the network is initialized with weights decomposed from a CNN pretrained on the CIFAR datasets. The contrary happens for our proposed MLconv structure, where most configurations show a degradation in performance. This can be explained by the fact that the LR structure was designed to approximate each individual 2D convolution filter at every input feature map, and the resulting structure comes with a closed-form solution for the approximation. With a good initialization from a CNN, the network easily arrives at a good minimum, while training a low-rank setting from scratch might have difficulty achieving a good local minimum. Although the proposed mapping can be viewed as a form of convolution filter, the mapping in Eq. (4) embeds a multilinear structure, hence possessing a certain degree of difference. Initializing the proposed mapping by applying CP decomposition, which has no closed-form solution, may lead the network to a suboptimal state.
Table V reports the average forward propagation time of a single sample, measured on CPU, for all three network structures on CIFAR-10. The second and third columns report the theoretical and actual speedups, respectively, measured by the number of multiply-accumulate operations normalized with respect to the convolution counterparts. For the proposed MLconv structure, we report the computation cost of both calculation strategies discussed in Section III. We refer to the first calculation strategy, which uses separable convolution, as Scheme1, and to the latter, which uses normal convolution, as Scheme2. Results from Scheme2 are denoted with an asterisk. All networks are implemented using the Keras library [58] with the TensorFlow [59] backend. There is a clear gap between the theoretical and actual speedup, especially for the proposed structure implemented with an unoptimized separable convolution operation. In fact, at the time of writing, an implementation of the separable convolution operation is still missing in most libraries, let alone an efficient one. By contrast, results from Scheme2, which uses normal convolution, show a near-perfect match between theory and implementation, since the normal convolution operation has been efficiently implemented and optimized in most popular libraries. This also explains why the computation gain of the LR structure is inferior to the MLconv structure (Scheme1) in theory but similar to ours in practice: the LR structure is realized by the normal convolution operation. The last four columns of Table V additionally demonstrate the scalability of Scheme2 with respect to the rank hyperparameter, as discussed in Section III-D.

V Conclusions
In this paper, we proposed a multilinear mapping to replace the conventional convolution filter in Convolutional Neural Networks. The complexity of the resulting structure can be flexibly controlled by adjusting the number of projections in each mode through a rank hyperparameter. The proposed mapping comes with two computation schemes, which allow memory and computation savings when the rank is small, and scalability when it is large. Numerical results showed that, with far fewer parameters, architectures employing our mapping can outperform standard CNNs. These are promising results that open future research directions, such as optimizing the rank of individual convolution layers to achieve the most compact structure and the best performance.
References

[1] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587, 2014.
[2] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788, 2016.
[3] M. A. Waris, A. Iosifidis, and M. Gabbouj, “CNN-based edge filtering for object proposals,” Neurocomputing, 2017.
[4] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, et al., “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82–97, 2012.
[5] A. Graves, A.-r. Mohamed, and G. Hinton, “Speech recognition with deep recurrent neural networks,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6645–6649, IEEE, 2013.
[6] M. Zabihi, A. B. Rad, S. Kiranyaz, M. Gabbouj, and A. K. Katsaggelos, “Heart sound anomaly and quality detection using ensemble of neural networks without segmentation,” in Computing in Cardiology Conference (CinC), pp. 613–616, IEEE, 2016.

[7] X. An, D. Kuang, X. Guo, Y. Zhao, and L. He, “A deep learning method for classification of EEG data based on motor imagery,” in International Conference on Intelligent Computing, pp. 203–210, Springer, 2014.
[8] A. Tsantekidis, N. Passalis, A. Tefas, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Using deep learning to detect price change indications in financial markets,” in European Signal Processing Conference (EUSIPCO), Kos, Greece, 2017.
[9] A. Tsantekidis, N. Passalis, A. Tefas, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Forecasting stock prices from the limit order book using convolutional neural networks,” in IEEE 19th Conference on Business Informatics (CBI), vol. 1, pp. 7–12, IEEE, 2017.
[10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
 [11] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
 [12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9, 2015.
[13] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding,” arXiv preprint arXiv:1510.00149, 2015.
[14] Y. Guo, A. Yao, and Y. Chen, “Dynamic network surgery for efficient DNNs,” in Advances in Neural Information Processing Systems, pp. 1379–1387, 2016.
 [15] W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, and Y. Chen, “Compressing convolutional neural networks,” arXiv preprint arXiv:1506.04449, 2015.
 [16] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li, “Learning structured sparsity in deep neural networks,” in Advances in Neural Information Processing Systems, pp. 2074–2082, 2016.
 [17] Y. Gong, L. Liu, M. Yang, and L. Bourdev, “Compressing deep convolutional networks using vector quantization,” arXiv preprint arXiv:1412.6115, 2014.

[18] D. Lin, S. Talathi, and S. Annapureddy, “Fixed point quantization of deep convolutional networks,” in International Conference on Machine Learning, pp. 2849–2858, 2016.
[19] C. Tai, T. Xiao, Y. Zhang, X. Wang, and E. Weinan, “Convolutional neural networks with low-rank regularization,” arXiv preprint arXiv:1511.06067, 2015.
 [20] E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus, “Exploiting linear structure within convolutional networks for efficient evaluation,” in Advances in Neural Information Processing Systems, pp. 1269–1277, 2014.
 [21] M. Jaderberg, A. Vedaldi, and A. Zisserman, “Speeding up convolutional neural networks with low rank expansions,” arXiv preprint arXiv:1405.3866, 2014.
[22] Y. Ioannou, D. Robertson, J. Shotton, R. Cipolla, and A. Criminisi, “Training CNNs with low-rank filters for efficient image classification,” arXiv preprint arXiv:1511.06744, 2015.
[23] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, “Quantized neural networks: Training neural networks with low precision weights and activations,” arXiv preprint arXiv:1609.07061, 2016.
[24] P. Gysel, M. Motamedi, and S. Ghiasi, “Hardware-oriented approximation of convolutional neural networks,” arXiv preprint arXiv:1604.03168, 2016.
[25] S.-C. Zhou, Y.-Z. Wang, H. Wen, Q.-Y. He, and Y.-H. Zou, “Balanced quantization: An effective and efficient approach to quantized neural networks,” Journal of Computer Science and Technology, vol. 32, no. 4, pp. 667–682, 2017.
 [26] M. Denil, B. Shakibi, L. Dinh, N. de Freitas, et al., “Predicting parameters in deep learning,” in Advances in Neural Information Processing Systems, pp. 2148–2156, 2013.
 [27] A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov, “Tensorizing neural networks,” in Advances in Neural Information Processing Systems, pp. 442–450, 2015.
[28] S. Lin, R. Ji, X. Guo, X. Li, et al., “Towards convolutional neural networks compression via global error reconstruction,” in IJCAI, pp. 1753–1759, 2016.
[29] V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky, “Speeding-up convolutional neural networks using fine-tuned CP-decomposition,” arXiv preprint arXiv:1412.6553, 2014.
 [30] Y.D. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin, “Compression of deep convolutional neural networks for fast and low power mobile applications,” arXiv preprint arXiv:1511.06530, 2015.
 [31] M. Lin, Q. Chen, and S. Yan, “Network in network,” arXiv preprint arXiv:1312.4400, 2013.
[32] Q. Li and D. Schonfeld, “Multilinear discriminant analysis for higher-order tensor data classification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 12, pp. 2524–2537, 2014.
[33] S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.-J. Zhang, “Discriminant analysis with tensor representation,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 526–532, IEEE, 2005.
 [34] H. Zhou, L. Li, and H. Zhu, “Tensor regression with applications in neuroimaging data analysis,” Journal of the American Statistical Association, vol. 108, no. 502, pp. 540–552, 2013.
[35] D. T. Thanh, J. Kanniainen, M. Gabbouj, and A. Iosifidis, “Tensor representation in high-frequency financial data for price change prediction,” arXiv preprint arXiv:1709.01268, 2017.
 [36] D. Tao, X. Li, X. Wu, and S. J. Maybank, “General tensor discriminant analysis and gabor features for gait recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 10, 2007.

[37] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, “MPCA: Multilinear principal component analysis of tensor objects,” IEEE Transactions on Neural Networks, vol. 19, no. 1, pp. 18–39, 2008.
[38] W. Guo, I. Kotsia, and I. Patras, “Tensor learning for regression,” IEEE Transactions on Image Processing, vol. 21, no. 2, pp. 816–827, 2012.
[39] X. Zhang, X. Zhou, M. Lin, and J. Sun, “ShuffleNet: An extremely efficient convolutional neural network for mobile devices,” arXiv preprint arXiv:1707.01083, 2017.
[40] M. Wang, B. Liu, and H. Foroosh, “Design of efficient convolutional layers using single intra-channel convolution, topological subdivisioning and spatial “bottleneck” structure,”
[41] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in AAAI, pp. 4278–4284, 2017.
[42] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.
 [43] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in International Conference on Machine Learning, pp. 448–456, 2015.

[44] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[45] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
 [46] J. Kossaifi, A. Khanna, Z. C. Lipton, T. Furlanello, and A. Anandkumar, “Tensor contraction layers for parsimonious deep nets,” arXiv preprint arXiv:1706.00439, 2017.
 [47] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM review, vol. 51, no. 3, pp. 455–500, 2009.
[48] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, Springer, 2015.
 [49] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in Proc. ICML, vol. 30, 2013.
 [50] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller, “Striving for simplicity: The all convolutional net,” arXiv preprint arXiv:1412.6806, 2014.
 [51] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv preprint arXiv:1505.00853, 2015.
 [52] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009.
 [53] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS workshop on deep learning and unsupervised feature learning, vol. 2011, p. 5, 2011.
[54] D. E. Rumelhart, G. E. Hinton, R. J. Williams, et al., “Learning representations by back-propagating errors,” Cognitive Modeling, vol. 5, no. 3, p. 1, 1988.
 [55] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
[56] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
 [57] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing humanlevel performance on imagenet classification,” in Proceedings of the IEEE international conference on computer vision, pp. 1026–1034, 2015.
 [58] F. Chollet, “keras.” https://github.com/fchollet/keras, 2015.
 [59] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Largescale machine learning on heterogeneous systems,” 2015. Software available from tensorflow.org.