Deep neural networks have achieved great success in solving many practical problems. Deep learning methods are based on multiple levels of representation, where each level involves simple but non-linear units. Many deep networks have been developed and successfully applied in various applications. For example, convolutional neural networks (CNNs) [17, 21, 29] have been widely applied to computer vision problems, and recurrent neural networks (RNNs) [9, 12, 25] are used in audio and natural language processing. For more detailed discussions, see [22] and its references.
In recent years, more and more works have focused on theoretical explanations of neural networks. One important topic is expressive power, i.e., comparing the expressive ability of different neural network architectures. In the literature [7, 8, 15, 16, 24, 26, 27, 28, 30], the depth efficiency of neural networks has been investigated. It is natural to expect that a deep network is more expressive than a shallow one. Recently, Khrulkov et al. [19] applied the tensor train decomposition to study the expressive power of RNNs. In [4], Cohen et al. theoretically analyzed a specific shallow convolutional network using the CP decomposition and a specific deep convolutional network based on a hierarchical tensor decomposition. The result of that paper is that the expressive power of such deep convolutional networks is significantly greater than that of shallow networks. Cohen and Shashua [5] generalized convolutional arithmetic circuits to convolutional rectifier networks in order to handle activation functions such as ReLU. They showed that the depth efficiency of convolutional rectifier networks is weaker than that of convolutional arithmetic circuits.
Although many theoretical analyses have been successful, the understanding of expressiveness still needs to be developed. The main contribution of this paper is a new deep network based on the Tucker tensor decomposition. We analyze the expressive power of the new network and show that a shallow network requires an exponential number of nodes to represent a Tucker network. Moreover, we compare the performance of the proposed Tucker network, the hierarchical tensor (HT) network and the shallow network on two datasets (MNIST [23] and CIFAR-10 [20]) and demonstrate that the proposed Tucker network outperforms the other two networks.
The rest of this paper is organized as follows. In Section 2, we briefly review tensor decompositions. We present the proposed Tucker network and show its expressive power in Section 3. In Section 4, experimental results are presented to demonstrate the performance of the Tucker network. Some concluding remarks are given in Section 5.
2 Tensor Decomposition
An $N$-dimensional tensor is a multidimensional array, i.e., $\mathcal{A} \in \mathbb{R}^{M_1 \times M_2 \times \cdots \times M_N}$. Its $n$-th unfolding matrix is defined as $A_{(n)} \in \mathbb{R}^{M_n \times \prod_{i \neq n} M_i}$, whose columns are the mode-$n$ fibers of $\mathcal{A}$. Given an index subset $s \subseteq [N]$ and the corresponding complement set $t = [N] \setminus s$, the $s$-matricization of $\mathcal{A}$ is denoted as a matrix $A^{(s)} \in \mathbb{R}^{\prod_{i \in s} M_i \times \prod_{j \in t} M_j}$, obtained by reshaping the tensor into a matrix with the modes in $s$ indexing the rows and the modes in $t$ indexing the columns.
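The unfolding and matricization operations above amount to a transpose followed by a reshape. The following NumPy sketch (ours, for illustration; the function names `unfold` and `matricize` are not from the paper) shows one possible implementation:

```python
import numpy as np

# A small order-4 tensor A in R^{2 x 3 x 4 x 5}.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3, 4, 5))

def unfold(T, n):
    """n-th unfolding: mode n indexes the rows, the remaining modes the columns."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def matricize(T, s):
    """s-matricization: modes in s index the rows, the complement indexes the columns."""
    t = [i for i in range(T.ndim) if i not in s]
    rows = int(np.prod([T.shape[i] for i in s]))
    return np.transpose(T, list(s) + t).reshape(rows, -1)

print(unfold(A, 1).shape)          # mode-1 unfolding: 3 rows, 2*4*5 columns
print(matricize(A, [0, 2]).shape)  # {1,3}-matricization: 2*4 rows, 3*5 columns
```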
We also introduce two important operators in tensor analysis, the tensor product and the Kronecker product. Given tensors $\mathcal{A}$ and $\mathcal{B}$ of order $P$ and $Q$ respectively, the tensor product $\mathcal{A} \otimes \mathcal{B}$ is a tensor of order $P + Q$ defined by $(\mathcal{A} \otimes \mathcal{B})_{i_1 \cdots i_P j_1 \cdots j_Q} = \mathcal{A}_{i_1 \cdots i_P} \mathcal{B}_{j_1 \cdots j_Q}$. Note that when $P = Q = 1$, the tensor product is the outer product of vectors. $\odot$ denotes the Kronecker product, which is an operation on two matrices, i.e., for matrices $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{p \times q}$, the matrix $A \odot B \in \mathbb{R}^{mp \times nq}$ is defined blockwise by $A \odot B = [a_{ij} B]$. Moreover, we use $[n]$ to denote the set $\{1, 2, \ldots, n\}$ for simplicity.
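Both operators are available in NumPy, which we use for illustration throughout (our sketches, not part of the paper):

```python
import numpy as np

a = np.arange(2.0)            # vector in R^2
b = np.arange(3.0) + 1.0      # vector in R^3
# Tensor product of two order-1 tensors: the outer product, an order-2 tensor.
T = np.multiply.outer(a, b)   # T[i, j] = a[i] * b[j]

# Kronecker product of two matrices: block structure [a_{ij} B].
Amat = np.array([[1.0, 2.0], [3.0, 4.0]])
Bmat = np.eye(2)
K = np.kron(Amat, Bmat)       # shape (4, 4)
```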
In the following, we review some well-known tensor decomposition methods and related convolutional networks.
CP decomposition: The CANDECOMP/PARAFAC (CP) decomposition [3, 14] expresses a tensor as a sum of rank-one terms,
$$\mathcal{A} = \sum_{r=1}^{R} a_r^{(1)} \otimes a_r^{(2)} \otimes \cdots \otimes a_r^{(N)},$$
where $a_r^{(n)} \in \mathbb{R}^{M_n}$, $r \in [R]$. The minimal value of $R$ such that the CP decomposition exists is called the CP rank of $\mathcal{A}$, denoted as $\mathrm{rank}_{CP}(\mathcal{A})$.
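A rank-$R$ CP tensor can be assembled directly as a sum of outer products. In this sketch (ours; the dimensions and rank are arbitrary illustrative values) we also verify a useful fact used later: any unfolding of a CP tensor has matrix rank at most $R$.

```python
import numpy as np

rng = np.random.default_rng(1)
R, dims = 2, (4, 5, 6)

# Factor vectors a_r^{(n)} stored as columns of M_n x R matrices.
factors = [rng.standard_normal((M, R)) for M in dims]

# CP tensor: sum of R rank-one (outer-product) terms.
A = np.zeros(dims)
for r in range(R):
    A += np.multiply.outer(
        np.multiply.outer(factors[0][:, r], factors[1][:, r]),
        factors[2][:, r])

# Any unfolding then has matrix rank at most R (generically exactly R).
rank1 = np.linalg.matrix_rank(A.reshape(4, -1))
```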
Tucker decomposition: The Tucker decomposition [6, 31] expresses a tensor as a core tensor multiplied by a factor matrix along each mode, which can be written as
$$\mathcal{A}_{m_1 m_2 \cdots m_N} = \sum_{r_1=1}^{R_1} \cdots \sum_{r_N=1}^{R_N} \mathcal{G}_{r_1 \cdots r_N} \, U^{(1)}_{m_1 r_1} \cdots U^{(N)}_{m_N r_N}, \qquad (2)$$
where $\mathcal{G} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$ is the core tensor and $U^{(n)} \in \mathbb{R}^{M_n \times R_n}$, $n \in [N]$, are the factor matrices. The minimal value of $(R_1, \ldots, R_N)$ such that (2) holds is called the Tucker rank of $\mathcal{A}$, denoted as $\mathrm{rank}_{Tucker}(\mathcal{A})$. If $R_1 = \cdots = R_N = R$, we simply denote it as $R$.
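The Tucker format (2) is a single multilinear contraction, which `np.einsum` expresses concisely (our sketch; dimensions and ranks are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
dims, ranks = (4, 5, 6), (2, 3, 2)

# Core tensor G and factor matrices U^{(n)}.
G = rng.standard_normal(ranks)
U = [rng.standard_normal((M, R)) for M, R in zip(dims, ranks)]

# A_{ijk} = sum_{abc} G_{abc} U1_{ia} U2_{jb} U3_{kc}.
A = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])

# The n-th unfolding has matrix rank at most R_n.
r1 = np.linalg.matrix_rank(A.reshape(4, -1))
```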
HT decomposition: The hierarchical Tucker (HT) tensor format is a multilevel variant of a tensor decomposition format. The definition requires the introduction of a dimension tree; for a detailed discussion, see [10, 11, 13]. Given a tensor $\mathcal{A} \in \mathbb{R}^{M_1 \times \cdots \times M_N}$ with $N = 2^L$, the hierarchical tensor decomposition has the following form:
$$\phi^{1,j,\gamma} = \sum_{\alpha=1}^{r_0} a^{1,j,\gamma}_{\alpha} \; a^{0,2j-1,\alpha} \otimes a^{0,2j,\alpha},$$
$$\phi^{l,j,\gamma} = \sum_{\alpha=1}^{r_{l-1}} a^{l,j,\gamma}_{\alpha} \; \phi^{l-1,2j-1,\alpha} \otimes \phi^{l-1,2j,\alpha}, \qquad l = 2, \ldots, L-1,$$
$$\mathcal{A} = \sum_{\alpha=1}^{r_{L-1}} a^{L,1}_{\alpha} \; \phi^{L-1,1,\alpha} \otimes \phi^{L-1,2,\alpha},$$
where $a^{0,j,\alpha} \in \mathbb{R}^{M_j}$ are the generating vectors of tensor $\mathcal{A}$ and $a^{l,j,\gamma} \in \mathbb{R}^{r_{l-1}}$ are the weight vectors; $r_l$ refers to the level-$l$ rank. We denote the rank tuple by $(r_0, r_1, \ldots, r_{L-1})$. If all the ranks are equal to $r$, we write $r$ for simplicity.
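For concreteness, the following sketch (ours, not from the paper; ranks and dimensions are illustrative) builds one instance of a two-level hierarchical tensor of order $N = 4$: modes $\{1,2\}$ and $\{3,4\}$ are first combined with level-0 rank $r_0$, then mixed at the top with level-1 rank $r_1$.

```python
import numpy as np

rng = np.random.default_rng(4)
r0, r1 = 2, 3                 # level-0 and level-1 ranks (illustrative)
dims = (3, 3, 3, 3)           # order N = 4, i.e., L = 2 levels

# Leaf vectors a^{0,n,alpha} for each of the 4 modes, one set per top index gamma.
a = rng.standard_normal((4, dims[0], r0, r1))
top = rng.standard_normal(r1)                  # top-level weights

A = np.zeros(dims)
for g in range(r1):
    # Level 1: combine modes {1,2} and {3,4} into order-2 tensors.
    B = sum(np.multiply.outer(a[0, :, k, g], a[1, :, k, g]) for k in range(r0))
    C = sum(np.multiply.outer(a[2, :, k, g], a[3, :, k, g]) for k in range(r0))
    # Top level: weighted sum of tensor products B (x) C.
    A += top[g] * np.multiply.outer(B, C)
```

The $\{1,2\}$-matricization of such a tensor has rank at most the top-level rank $r_1$, which is the structural fact exploited in the depth-efficiency analysis.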
2.1 Convolutional Networks
Given a dataset of pairs $\{(X^{(b)}, y^{(b)})\}_b$, each object $X$ is represented as a set of vectors $X = (x_1, \ldots, x_N)$ with $x_i \in \mathbb{R}^s$. By applying parameter-dependent functions $f_{\theta_1}, \ldots, f_{\theta_M} : \mathbb{R}^s \to \mathbb{R}$, we construct a representation map. Object $X$ is classified into one of $Y$ categories. Classification is carried out through the maximization of the following score function:
$$h_y(x_1, \ldots, x_N) = \sum_{d_1=1}^{M} \cdots \sum_{d_N=1}^{M} \mathcal{A}^y_{d_1 \cdots d_N} \prod_{i=1}^{N} f_{\theta_{d_i}}(x_i), \qquad y \in [Y],$$
where $\mathcal{A}^y \in \mathbb{R}^{M \times \cdots \times M}$ ($N$ modes) is a trainable coefficient tensor.
The representation functions $f_{\theta_d}$ have many choices, for example neuron-type functions $f_{\theta_d}(x) = \sigma(w_d^{\top} x + b_d)$ for parameters $\theta_d = (w_d, b_d)$ and a point-wise non-linear activation $\sigma$. Commonly used activation functions include the hard threshold, $\sigma(z) = 1$ for $z \geq 0$ and $0$ otherwise; the rectified linear unit (ReLU), $\sigma(z) = \max(z, 0)$; and the sigmoid, $\sigma(z) = 1/(1 + e^{-z})$.
The main task is to estimate the parameters $\theta_d$ and the coefficient tensors $\mathcal{A}^y$. The computational challenge is that each coefficient tensor has an exponential number of entries. We can utilize tensor decompositions to address this issue.
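The exponential blow-up is easy to see numerically: with $M$ representation functions and $N$ patches, a full coefficient tensor has $M^N$ entries. The sketch below (ours; sizes are illustrative) evaluates the score function naively for one object and one class.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, s = 4, 5, 6          # N patches, M representation functions, patch dim s

# Neuron-type representation layer: f_theta_d(x) = relu(w_d^T x + b_d).
W, b = rng.standard_normal((M, s)), rng.standard_normal(M)
X = rng.standard_normal((N, s))                # one object = N patch vectors
F = np.maximum(W @ X.T + b[:, None], 0.0).T    # F[i, d] = f_theta_d(x_i)

# A full coefficient tensor A^y has M^N entries -- exponential in N.
Ay = rng.standard_normal((M,) * N)
score = np.einsum('abcd,a,b,c,d->', Ay, F[0], F[1], F[2], F[3])
```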
If the coefficient tensor is in CP decomposition, the network corresponding to the CP decomposition is called a shallow network (or CP network); see Figure 1. We obtain its score function:
$$h_y(x_1, \ldots, x_N) = \sum_{r=1}^{R} a^y_r \prod_{i=1}^{N} \langle a^{i,r}, f_{\theta}(x_i) \rangle,$$
where $f_{\theta}(x_i) = (f_{\theta_1}(x_i), \ldots, f_{\theta_M}(x_i))^{\top}$. Note that the same vectors $a^{i,r}$ are shared across all classes $y$. If $R$ is set sufficiently large, the model is universal, i.e., any coefficient tensor can be represented.
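The CP-network score can be evaluated without ever forming the $M^N$-entry tensor: first the inner products $\langle f_{\theta}(x_i), a^{i,r} \rangle$, then a product over patches, then per-class weights. A self-contained sketch (ours; the representation outputs `F` stand in for a trained representation layer):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, R, Y = 4, 5, 3, 10       # patches, representation dim, CP rank, classes

F = rng.standard_normal((N, M))        # F[i, d] = f_theta_d(x_i), precomputed
U = rng.standard_normal((N, M, R))     # shared factor vectors a^{i,r}
a = rng.standard_normal((Y, R))        # per-class top weights a^y_r

inner = np.einsum('im,imr->ir', F, U)  # <f(x_i), a^{i,r}> for all i, r
scores = a @ np.prod(inner, axis=0)    # h_y for every class y
label = int(np.argmax(scores))         # predicted class
```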
If the coefficient tensors are in the HT format like (8), the network is referred to as an HT network. An example of an HT network with $N = 4$ is shown in Figure 2. Cohen et al. [4] analyzed the expressive power of the HT network and proved that a shallow network with exponentially large width is required to emulate an HT network.
3 Tucker Network
Suppose the coefficient tensors $\mathcal{A}^y$ are in Tucker format with the same factor matrices $U^{(1)}, \ldots, U^{(N)}$ shared across classes and class-dependent core tensors $\mathcal{G}^y$ in (6). If we set $R_1 = \cdots = R_N = R$, the number of parameters is $Y R^N + N M R$. If we set $R = M$, the model is universal, i.e., any tensor can be represented in Tucker format; in this case $Y M^N + N M^2$ parameters are needed. Note the score function for the Tucker network:
The Tucker network architecture is given in Figure 3. The outputs from the convolution layer are
$$v^{(i)} = (U^{(i)})^{\top} f_{\theta}(x_i), \qquad i \in [N],$$
where $U^{(i)} \in \mathbb{R}^{M \times R_i}$ and $f_{\theta}(x_i) = (f_{\theta_1}(x_i), \ldots, f_{\theta_M}(x_i))^{\top}$. The last output, i.e., the score value, is given as follows:
$$h_y(x_1, \ldots, x_N) = \langle \mathcal{G}^y, v^{(1)} \otimes v^{(2)} \otimes \cdots \otimes v^{(N)} \rangle,$$
where $\langle \cdot, \cdot \rangle$ is the tensor scalar product, i.e., the sum of the entry-wise product of two tensors. Because $\mathcal{G}^y$ is an $N$-th order tensor of smaller dimensions $R_1, \ldots, R_N$, it can be further decomposed with a deeper network. In this sense, the Tucker network is also a kind of deep network.
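The two stages above can be sketched directly in NumPy (ours, for illustration; `F` stands in for the representation-layer outputs): the hidden outputs are small $R$-dimensional projections, and the score is the scalar product of the core with their outer product.

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, R = 4, 5, 3

F = rng.standard_normal((N, M))                       # f_theta(x_i) for each patch
U = [rng.standard_normal((M, R)) for _ in range(N)]   # shared factor matrices U^{(i)}
G = rng.standard_normal((R,) * N)                     # class-dependent core G^y

# Convolution-layer outputs: v_i = U^{(i)T} f_theta(x_i).
v = [U[i].T @ F[i] for i in range(N)]

# Score = <G^y, v_1 (x) v_2 (x) v_3 (x) v_4>, the tensor scalar product.
score = np.einsum('abcd,a,b,c,d->', G, *v)
```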
The following theorem demonstrates the expressive power of Tucker network.
Theorem 1. Let $\mathcal{A}^y$ be a tensor of order $N$ with dimension $M$ in each mode, generated by the Tucker form in (6) with all ranks equal to $R$. Define the matricizations $A^{(s)}$ for all possible subsets $s \subset [N]$, and consider the space of all possible configurations of the parameters $\{\mathcal{G}^y, U^{(1)}, \ldots, U^{(N)}\}$. In this space, $\mathcal{A}^y$ will have CP rank of at least $R^{\lfloor N/2 \rfloor}$ almost everywhere, i.e., the Lebesgue measure of the set of configurations whose CP rank is less than $R^{\lfloor N/2 \rfloor}$ is zero.
The proof can be found in the supplementary section. We remark that if $R = M$: when $N$ is even, the Lebesgue measure of the Tucker format space whose CP rank is less than $M^{N/2}$ is zero; when $N$ is odd, the Lebesgue measure of the Tucker format space whose CP rank is less than $M^{(N-1)/2}$ is also zero.
3.1 Connection with HT Network
In this subsection, to compare the expressive power of the HT and Tucker networks, we first discuss the relationship between the Tucker format and the hierarchical Tucker tensor format. Here we only consider the hierarchical tensor format; its corresponding HT network (8) has been well discussed in [4].
We start from the hierarchical Tucker tensor, whose HT network architecture is shown in Figure 2. Given a tensor of order $2^L$, its hierarchical tensor format can always be written in the recursive form above, where the leaf factors are vectors of the appropriate sizes. Collecting the leaf vectors into factor matrices, the lowest level can be expressed as a matrix identity in which $\mathrm{vec}(\cdot)$ is the linear transformation that converts a matrix into a column vector and $\mathrm{diag}(\cdot)$ is the diagonal operator that transforms a vector into a diagonal matrix. The same rewriting applies, level by level, to the higher levels.
From the mixed-product property of the Kronecker product, $(A \odot B)(C \odot D) = (AC) \odot (BD)$, we deduce that the factors of adjacent levels can be merged. We can then conclude that a hierarchical tensor format can be written as a Tucker tensor of the same order. It is worth noting that, from (7), the rank of the resulting core tensor is no larger than the ranks of its factor matrices; because of the Kronecker structure of the factor matrices, the corresponding Tucker rank bounds follow.
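The Kronecker mixed-product identity used in this derivation can be checked numerically (our sketch; the matrix sizes are arbitrary compatible choices):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((4, 2))
C = rng.standard_normal((3, 5))
D = rng.standard_normal((2, 3))

# Mixed-product property: (A (.) B)(C (.) D) = (AC) (.) (BD),
# which merges two Kronecker-structured levels into one.
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
```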
When the hierarchical tensor has $L$ layers, we can similarly deduce the following results.
Theorem 2. Any hierarchical tensor can be represented as a Tucker tensor of the same order, and vice versa.
Theorem 3. For any tensor $\mathcal{A}$, if the HT ranks of $\mathcal{A}$ are bounded by $r$, then the Tucker ranks of $\mathcal{A}$ are bounded by $r$ as well.
According to Theorem 3, given a hierarchical Tucker network of width $r$, we know that the width of the corresponding Tucker network cannot be larger than $r$.
4 Experimental Results
We designed experiments to compare the performance of three networks: the Tucker network, the HT network and the shallow network. The results illustrate the usefulness of the Tucker network. We implement the shallow network, the Tucker network and the HT network with a TensorFlow [1] back-end, and test the three networks on two different data sets: MNIST [23] and CIFAR-10 [20]. All three networks are trained using the back-propagation algorithm. In all three networks, we choose ReLU as the activation function in the representation layer and apply batch normalization [18] between the convolution layer and the pooling layer to avoid numerical overflow and underflow.
We choose the neuron-type representation map with ReLU activation: $f_{\theta_d}(x) = \max(w_d^{\top} x + b_d, 0)$. The representation mapping now acts as a convolution layer in a general CNN. Each image patch is transformed through a representation function with parameter sharing across all the image patches. The convolution layer in Figure 3 can actually be seen as a locally connected layer in a CNN, i.e., a specific convolution layer without parameter sharing, which means that the filter parameters differ when sliding across different spatial positions. In the hidden layer, a non-overlapping 3D convolution operator is applied. It is followed by a product pooling layer that realizes the outer product computation. This can be explained as a pooling layer with local connectivity, which only connects a neuron with some of the neurons in the previous layer; the output of a neuron is the multiplication of the entries of the neurons connected to it. The fully-connected layer simply applies a linear mapping to the output of the pooling layer. The output of the Tucker network is a vector of class scores.
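Product pooling differs from the usual max or average pooling only in the reduction it applies. A minimal sketch of the idea (ours; array sizes are illustrative, not the experimental configuration):

```python
import numpy as np

rng = np.random.default_rng(8)
# Hidden-layer outputs: 8 spatial positions, each with 4 channels.
h = rng.standard_normal((8, 4))

# Product pooling over non-overlapping pairs of positions: each pooled
# neuron multiplies the entries of the two neurons it is connected to,
# realizing the entry-wise part of the outer product computation.
pooled = h[0::2] * h[1::2]       # shape (4, 4)
```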
The MNIST database of handwritten digits has a training set of 60000 examples and a test set of 10000 examples, with 10 categories from 0 to 9. Each image is of $28 \times 28$ pixels. In the experiment, we select the gradient descent optimizer for back-propagation with batch size 200, and use an exponentially decaying learning rate with initial learning rate 0.2, decay step 6000 and decay rate 0.1. Figure 4 shows the training and test accuracy of the three networks, each with 3834 learned parameters. The parameters include four parameters in batch normalization (mean, std, alpha, beta). We list the filter size, stride size and rank as well in Table 1. It is clear that the Tucker network outperforms the shallow network and the HT network. Moreover, we test the sensitivity of the Tucker network with respect to the rank, and compare the performance with the other two networks using the same number of parameters. Figure 5 illustrates the sensitivity results; each value records the highest accuracy on the training or test data. The Tucker network achieves the highest accuracy in most cases.
CIFAR-10 [20] is a more complicated data set consisting of 60000 color images of size $32 \times 32$ with 10 classes. Here, we use the gradient descent optimizer with learning rate 0.05 and batch size 200 for training. In Figure 6 we report the training and test accuracy with 23790 trained parameters. Table 2 shows the parameter details of the sensitivity test, whose results are displayed in Figure 7. From Figures 6 and 7, the Tucker network still performs better when fitting a more complicated data set.
In this paper, we presented a Tucker network and proved its expressive power theorem. We showed that a shallow network of exponentially large width is required to mimic a Tucker network. A connection between the Tucker network and the HT network was discussed. The experiments on the MNIST and CIFAR-10 data show the usefulness of the proposed Tucker network.
Table 1: Network settings on MNIST (networks within a group share the number of parameters).

| Network | #Parameters | Width | Filter size | Stride size | Rank |
| Tucker  | 3478 | 10 | 14, 23 | 14, 5  | 2  |
| HT      | 3478 | 14 | 14, 14 | 14, 14 | 8  |
| Shallow | 3478 | 10 | 16, 21 | 12, 7  | 2  |
| Tucker  | 3834 | 12 | 14, 17 | 14, 11 | 3  |
| HT      | 3834 | 18 | 14, 14 | 14, 14 | 3  |
| Shallow | 3834 | 16 | 14, 16 | 14, 12 | 3  |
| Tucker  | 5300 | 12 | 14, 15 | 14, 13 | 4  |
| HT      | 5300 | 12 | 16, 26 | 12, 2  | 4  |
| Shallow | 5300 | 10 | 20, 21 | 8, 7   | 4  |
| Tucker  | 8657 | 11 | 14, 14 | 14, 14 | 5  |
| HT      | 8657 | 11 | 26, 27 | 2, 1   | 11 |
| Shallow | 8657 | 17 | 20, 23 | 8, 5   | 10 |
Table 2: Network settings on CIFAR-10.

| Network | Width | Filter size | Stride size | Rank |
| Tucker  | 10 | 16, 26 | 16, 6 | 3 |
| Shallow | 10 | 17, 26 | 15, 6 | 3 |
| Tucker  | 10 | 16, 31 | 16, 1 | 4 |
-  M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016.
-  R. Caron and T. Traynor. The zero set of a polynomial. 2005.
-  J. D. Carroll and J.-J. Chang. Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart–Young” decomposition. Psychometrika, 35(3):283–319, 1970.
-  N. Cohen, O. Sharir, and A. Shashua. On the expressive power of deep learning: A tensor analysis. In Conference on Learning Theory, pages 698–728, 2016.
-  N. Cohen and A. Shashua. Convolutional rectifier networks as generalized tensor decompositions. In International Conference on Machine Learning, pages 955–963, 2016.
-  L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253–1278, 2000.
-  O. Delalleau and Y. Bengio. Shallow vs. deep sum-product networks. In Advances in Neural Information Processing Systems, pages 666–674, 2011.
-  R. Eldan and O. Shamir. The power of depth for feedforward neural networks. In Conference on learning theory, pages 907–940, 2016.
-  F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with LSTM. 1999.
-  L. Grasedyck. Hierarchical singular value decomposition of tensors. SIAM Journal on Matrix Analysis and Applications, 31(4):2029–2054, 2010.
-  L. Grasedyck and W. Hackbusch. An introduction to hierarchical (H-) rank and TT-rank of tensors with examples. Computational Methods in Applied Mathematics, 11(3):291–304, 2011.
-  A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE, 2013.
-  W. Hackbusch. Tensor spaces and numerical tensor calculus, volume 42. Springer Science & Business Media, 2012.
-  R. A. Harshman et al. Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multimodal factor analysis. 1970.
-  J. Håstad. Almost optimal lower bounds for small depth circuits. In Proceedings of the eighteenth annual ACM symposium on Theory of computing, pages 6–20, 1986.
-  J. Håstad and M. Goldmann. On the power of small-depth threshold circuits. Computational Complexity, 1(2):113–129, 1991.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
-  V. Khrulkov, A. Novikov, and I. Oseledets. Expressive power of recurrent neural networks. arXiv preprint arXiv:1711.00811, 2017.
-  A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  Y. LeCun, Y. Bengio, et al. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995.
-  Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
-  Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, and L. D. Jackel. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, pages 396–404, 1990.
-  J. Martens and V. Medabalimi. On the expressive efficiency of sum product networks. arXiv preprint arXiv:1411.7717, 2014.
-  T. Mikolov, S. Kombrink, L. Burget, J. Černockỳ, and S. Khudanpur. Extensions of recurrent neural network language model. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5528–5531. IEEE, 2011.
-  G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks. In Advances in neural information processing systems, pages 2924–2932, 2014.
-  R. Pascanu, G. Montufar, and Y. Bengio. On the number of response regions of deep feed forward networks with piece-wise linear activations. arXiv preprint arXiv:1312.6098, 2013.
-  T. Poggio, F. Anselmi, and L. Rosasco. I-theory on depth vs width: hierarchical function composition. Technical report, Center for Brains, Minds and Machines (CBMM), 2015.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
-  M. Telgarsky. Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485, 2016.
-  L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279–311, 1966.
Appendix A. Proofs
A.1 Proof of Theorem 1
In Section 3, we presented the Tucker network and showed its expressive power. To prove Theorem 1, we first state and prove three lemmas which will be needed for the proof.
Lemma 1. For any matricization $A^{(s)}$ of a tensor $\mathcal{A}$ whose CP rank is $R$, $\mathrm{rank}(A^{(s)}) \leq R$.
Lemma 1 gives a lower bound on the CP rank: if a matricization of a tensor $\mathcal{A}$ has matrix rank $R$, then by the lemma, the CP rank of $\mathcal{A}$ is at least $R$.
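Lemma 1 can be sanity-checked numerically (our sketch; sizes are illustrative): build a tensor of CP rank at most $R$ and confirm that a matricization has matrix rank no larger than $R$.

```python
import numpy as np

rng = np.random.default_rng(6)
R, dims = 3, (4, 4, 4, 4)

# A tensor of CP rank at most R: sum over r of u1_r (x) u2_r (x) u3_r (x) u4_r.
U = [rng.standard_normal((M, R)) for M in dims]
A = np.einsum('ar,br,cr,dr->abcd', *U)

# Every matricization of A then has matrix rank at most R.
mat = A.reshape(16, 16)          # {1,2}-vs-{3,4} matricization
r = np.linalg.matrix_rank(mat)
```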
Lemma 2. For an $N$-th order tensor in Tucker format, its matricization has the following form. Given an $N$-th order tensor whose Tucker format is (6), and index subsets $s = \{i_1, \ldots, i_{|s|}\}$ and $t = [N] \setminus s = \{j_1, \ldots, j_{|t|}\}$, then
$$A^{(s)} = \big(U^{(i_1)} \odot \cdots \odot U^{(i_{|s|})}\big) \, G^{(s)} \, \big(U^{(j_1)} \odot \cdots \odot U^{(j_{|t|})}\big)^{\top},$$
where $G^{(s)}$ is the $s$-matricization of the core tensor $\mathcal{G}$. For simplicity, we denote $P = U^{(i_1)} \odot \cdots \odot U^{(i_{|s|})}$ and $Q = U^{(j_1)} \odot \cdots \odot U^{(j_{|t|})}$, so that $A^{(s)} = P G^{(s)} Q^{\top}$.
Lemma 3. If each factor matrix of the tensor has full column rank, i.e., each $U^{(n)}$ has full column rank, then $\mathrm{rank}(A^{(s)}) = \mathrm{rank}(G^{(s)})$.
Proof of Theorem 1
According to Lemma 1, it suffices to prove that the rank of the matricization $A^{(s)}$ is at least $R^{\lfloor N/2 \rfloor}$ almost everywhere. From Lemma 3, equivalently, we prove that the rank of $G^{(s)}$ is at least $R^{\lfloor N/2 \rfloor}$ almost everywhere.
Consider all possible index subsets $s$ and the corresponding complement sets $t = [N] \setminus s$. The matricization $G^{(s)}$ simply rearranges the elements of the core tensor $\mathcal{G}$, so for every subset $s$ its entries are free parameters of the configuration space. In the following, we prove that the Lebesgue measure of the set of configurations for which $\mathrm{rank}(G^{(s)}) < R^{\lfloor N/2 \rfloor}$ is zero.
Let $B$ be the top-left $R^{\lfloor N/2 \rfloor} \times R^{\lfloor N/2 \rfloor}$ submatrix of $G^{(s)}$ and consider its determinant $\det(B)$. Since $\det(B)$ is a polynomial in the entries of $\mathcal{G}$, according to the theorem in [2], it either vanishes on a set of measure zero or it is the zero polynomial. It is not the zero polynomial (it equals one when $B$ is the identity), so the Lebesgue measure of the set where $\det(B) = 0$ is zero, i.e., the Lebesgue measure of the set of configurations whose rank is less than $R^{\lfloor N/2 \rfloor}$ is zero. The result thus follows. ∎
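The "almost everywhere" claim is easy to witness numerically (our sketch; sizes are illustrative): a randomly drawn core and factors generically yield a matricization of maximal rank $R^2$ for $N = 4$, so by Lemma 1 the CP rank is at least $R^2$.

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, R = 4, 5, 3

# A generic (random) Tucker configuration.
G = rng.standard_normal((R,) * N)
U = [rng.standard_normal((M, R)) for _ in range(N)]
A = np.einsum('abcd,ia,jb,kc,ld->ijkl', G, *U)

# The {1,2}-matricization generically attains the maximal rank R^2 = 9,
# so by Lemma 1 the CP rank of A is at least R^2.
r = np.linalg.matrix_rank(A.reshape(M * M, M * M))
```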
A.2 Proof of Theorem 2
In this subsection, we prove Theorem 2, the connection between the Tucker tensor format and the hierarchical Tucker tensor format. The expressive power of the hierarchical Tucker network has been well discussed in [4].
In Section 2, we defined the $s$-matricization, which is a general form of matricization. In the following, we simply consider the proper-order matricization of a tensor, in which the first half of the modes index the rows and the second half index the columns.
The hierarchical tensor decomposition format is given as follows: