Tensor Regression Networks with various Low-Rank Tensor Approximations

12/27/2017 · by Xingwei Cao, et al. · McGill University

Tensor regression networks achieve high compression rates of model parameters in deep networks while having only a slight impact on performance. They replace the flattening operation and fully-connected layers of a traditional network with a tensor regression layer on which low-rank constraints are imposed. We investigate tensor regression networks using various low-rank tensor approximations, aiming to leverage the multi-modal structure of high-dimensional data by enforcing efficient low-rank constraints. We provide a theoretical analysis giving insights into the choice of the rank parameters. We evaluate the performance of the proposed models against state-of-the-art deep convolutional models. On the CIFAR-10 dataset, we achieved a compression rate of 0.018 with a loss of accuracy of less than 1%.


1 Introduction

Tensors have been attracting increasing interest from the machine learning community over the past decades. One of the reasons for this interest is that the tensor structure provides a natural representation of multi-modal data. Such multi-modal datasets are often encountered in scientific fields including image analysis [14], signal processing [3] and spatio-temporal analysis [1, 23]. Tensor methods allow statistical models to efficiently learn multilinear relationships between inputs and outputs by leveraging multilinear algebra and efficient low-rank constraints. The low-rank constraints on higher-order multivariate regression can be interpreted as a regularization technique. As shown in [19], an efficient low-rank multilinear regression model with a tensor response can improve regression performance.

Incorporating tensor methods into deep neural networks has become a prominent area of study. In particular, over the past decade, tensor decomposition and approximation algorithms have been introduced into deep neural networks, notably for 1) efficient compression of the model with low-rank constraints [17] and 2) leveraging the multi-modal structure of high-dimensional datasets [9]. For illustration, Kossaifi et al. proposed the tensor regression layer (TRL), which replaces the vectorization operation and fully-connected layers of Convolutional Neural Networks (CNNs) with a higher-order multivariate regression [9]. The advantage of such a replacement is a high compression rate of the model while preserving the multi-modal information of the data by enforcing efficient low-rank constraints. Given high-dimensional data, the vectorization operation leads to a loss of multi-modal information: the higher-level dependencies among the various modes are lost when the data is mapped to a linear space. For instance, applying a flattening operation to a colored image (a 3rd-order tensor) removes the relationship between the red channel and the blue channel. The tensor regression layer is able to capture such multi-modal information by performing a multilinear regression between the output of the last convolutional layer and the softmax layer.

Following [9], we investigate the properties and performance of tensor regression layers from the perspectives of regularization and compression. We interpret low-rank constraints as a regularization technique for higher-order multivariate regression and enforce low-rank constraints on the weight tensor between the output tensors of the CNN and the output vectors. Furthermore, we compare tensor regression layers using various tensor decomposition formats, aiming to provide comparative insight into the different low-rank constraints that can be enforced on higher-order multivariate regression. We compare the performance of TRLs using Tucker, CP and Tensor Train decompositions in a small standard CNN on MNIST and Fashion-MNIST, and in Residual Networks (ResNet) [5, 6] on CIFAR-10. To investigate the regularization effect, we employ shallow CNNs, train them with different numbers of training samples and compare the resulting performances.

We show that a compression rate of 54 can be achieved using the TT decomposition with a loss of accuracy of less than 0.3% relative to the weight matrix of a 32-layer Residual Network with a fully-connected layer on the CIFAR-10 dataset. Surprisingly, we also show that an even better compression rate with a smaller loss in accuracy on CIFAR-10 can be achieved by simply using global average pooling (GAP) followed by a small fully-connected layer. However, using the same trick in the smaller CNN on MNIST led to very poor results.

The remainder of this paper is organized as follows. We review background knowledge on multilinear algebra and tensor decomposition formats in Section 2. In Section 3, we present and investigate the tensor regression layer with different tensor decomposition formats. We show that the global average pooling (GAP) layer is a special case of a TRL with Tucker decomposition in Section 4. In Section 5 we present a simple analysis of low-rank constraints showing how particular choices of the tensor rank parameters can drastically affect the expressiveness of the network. We demonstrate the empirical performance of low-rank TRLs in Section 6, followed by a discussion and conclusion in Section 7.

2 Background

(a) Tensor Train Decomposition.

(b) Tucker Decomposition.
Figure 3: Tensor network representations of the Tucker and Tensor Train decompositions of an input tensor. Circular nodes represent tensors and edges represent contraction operations between two tensors.

2.1 Tensor Algebra

We begin with a concise review of notation and basic tensor algebra. For a more comprehensive review, we refer the reader to [8]. Throughout this paper, a vector is denoted by a boldface lowercase letter, e.g. . Matrices and higher-order tensors are denoted by boldface uppercase and calligraphic letters respectively, e.g.  and . Given an Nth-order tensor , its th entry is denoted by or , where . The notation denotes the range of integers from to inclusive. Given a 3rd-order tensor , its slices are the matrices obtained by fixing all but two indices; the horizontal, lateral and frontal slices of are denoted by , and respectively. Similarly, the mode-n fibers of are the vectors obtained by fixing every index but the n-th one. The mode-n matricization (or mode-n unfolding) of a tensor is the matrix having its mode-n fibers as columns and is denoted by . Given vectors , their outer product is denoted by and is defined by for all , where . An Nth-order tensor is called rank-one if it can be written as the outer product of N vectors (i.e. ). The n-mode product of a tensor with a matrix is denoted by and is defined by

for all , where . Similarly, we denote an -mode product of a tensor and a vector by for all and it is defined by .

The Kronecker product of matrices and is the block matrix of size and is denoted by . Given matrices and , both of size , their Hadamard product (or component-wise product) is denoted by and defined by . The Khatri-Rao product of matrices and is the matrix defined by

(1)

where  (resp. ) denotes the th column of  (resp. B).
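The three matrix products above can be sketched in a few lines of NumPy (our illustration; the Khatri-Rao product is simply the column-wise Kronecker product):

```python
import numpy as np

A = np.random.randn(4, 3)
B = np.random.randn(5, 3)
C = np.random.randn(4, 3)

kronecker = np.kron(A, B)                               # shape (4*5, 3*3)
hadamard = A * C                                        # element-wise, same shapes
khatri_rao = np.stack([np.kron(A[:, r], B[:, r])        # column-wise Kronecker
                       for r in range(A.shape[1])], axis=1)   # shape (4*5, 3)

print(kronecker.shape, hadamard.shape, khatri_rao.shape)
```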

2.2 Various Tensor Decompositions

In this section we present three of the commonly used tensor decomposition formats: Candecomp/Parafac, Tucker and Tensor-Train.

CP decomposition.    The CP decomposition [2, 4] approximates a tensor with a sum of rank-one tensors [8]. The rank of the decomposition is simply the number of rank-one tensors used to approximate the input tensor: given an input tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, its approximation by a CP decomposition of rank $R$ is defined by

$\widehat{\mathcal{X}} = \sum_{r=1}^{R} \mathbf{a}^{(1)}_r \circ \mathbf{a}^{(2)}_r \circ \cdots \circ \mathbf{a}^{(N)}_r$     (2)

In Eq. (2), $\widehat{\mathcal{X}}$ denotes the CP approximation of $\mathcal{X}$, and each factor matrix $\mathbf{A}^{(n)} \in \mathbb{R}^{I_n \times R}$ consists of the column vectors $\mathbf{a}^{(n)}_1, \ldots, \mathbf{a}^{(n)}_R$ for $n \in [N]$.

We have the following useful expression of Eq. (2) in terms of the matricization of $\widehat{\mathcal{X}}$:

$\widehat{\mathbf{X}}_{(n)} = \mathbf{A}^{(n)} \left( \mathbf{A}^{(N)} \odot \cdots \odot \mathbf{A}^{(n+1)} \odot \mathbf{A}^{(n-1)} \odot \cdots \odot \mathbf{A}^{(1)} \right)^{\top}$     (3)
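As a sanity check of Eq. (2), the following NumPy sketch (our notation, with factors[n] of shape (I_n, R)) reconstructs a tensor from its CP factors as a sum of R rank-one outer products:

```python
import numpy as np

def cp_to_tensor(factors):
    """factors[n] has shape (I_n, R); returns the sum of R rank-one tensors."""
    rank = factors[0].shape[1]
    tensor = np.zeros(tuple(f.shape[0] for f in factors))
    for r in range(rank):
        rank_one = factors[0][:, r]
        for factor in factors[1:]:
            rank_one = np.multiply.outer(rank_one, factor[:, r])  # outer product
        tensor += rank_one
    return tensor

factors = [np.random.randn(i, 4) for i in (3, 5, 6)]  # a rank-4 CP model
print(cp_to_tensor(factors).shape)  # (3, 5, 6)
```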

Tucker decomposition.    The Tucker decomposition approximates a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ by the product of a core tensor $\mathcal{G} \in \mathbb{R}^{R_1 \times \cdots \times R_N}$ and factor matrices $\mathbf{U}^{(n)} \in \mathbb{R}^{I_n \times R_n}$ for $n \in [N]$:

$\widehat{\mathcal{X}} = \mathcal{G} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times_3 \cdots \times_N \mathbf{U}^{(N)}$     (4)

The mode-n matricization of $\widehat{\mathcal{X}}$ from Eq. (4) can be written as

$\widehat{\mathbf{X}}_{(n)} = \mathbf{U}^{(n)} \mathbf{G}_{(n)} \left( \mathbf{U}^{(N)} \otimes \cdots \otimes \mathbf{U}^{(n+1)} \otimes \mathbf{U}^{(n-1)} \otimes \cdots \otimes \mathbf{U}^{(1)} \right)^{\top}$     (5)

The tuple $(R_1, \ldots, R_N)$ is the rank of the Tucker decomposition and determines the size of the core tensor $\mathcal{G}$. An example of a Tucker approximation of a fourth-order tensor is given in Figure 3.
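A corresponding NumPy sketch for Eq. (4) (again our notation, not the paper's code): the Tucker approximation is obtained by multiplying the core with a factor matrix along each mode.

```python
import numpy as np

def tucker_to_tensor(core, factors):
    """core: (R_1, ..., R_N); factors[n]: (I_n, R_n)."""
    tensor = core
    for mode, factor in enumerate(factors):
        # n-mode product: contract mode `mode` of `tensor` with the columns of `factor`
        tensor = np.moveaxis(np.tensordot(factor, tensor, axes=([1], [mode])), 0, mode)
    return tensor

core = np.random.randn(2, 3, 4)
factors = [np.random.randn(i, r) for i, r in zip((5, 6, 7), (2, 3, 4))]
print(tucker_to_tensor(core, factors).shape)  # (5, 6, 7)
```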

Tensor train decomposition.    The tensor train (TT) decomposition [18] provides a space-efficient representation of higher-order tensors. It approximates an $N$th-order tensor by products of $N$ third-order tensors called core tensors, or simply cores. The rank of the TT decomposition is the tuple $(R_0, R_1, \ldots, R_N)$, where $R_0 = R_N = 1$.

Given a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$, its approximation by a TT decomposition is defined entry-wise by

$\widehat{\mathcal{X}}_{i_1, i_2, \ldots, i_N} = \mathbf{G}^{(1)}_{i_1} \mathbf{G}^{(2)}_{i_2} \cdots \mathbf{G}^{(N)}_{i_N}$     (6)

where each $\mathbf{G}^{(n)}_{i_n} \in \mathbb{R}^{R_{n-1} \times R_n}$ is the $i_n$-th lateral slice of the core $\mathcal{G}^{(n)} \in \mathbb{R}^{R_{n-1} \times I_n \times R_n}$ and juxtaposition denotes the matrix product.

In order to express Eq. (6) in terms of matricizations of , we first define the following contraction operation on core tensors.

Definition 1.

Given a set of core tensors in Eq. (6) for , we define as the product of core tensors for :

(7)

Similarly to , we define as the product of core tensors for where and . A tensor network representation of core separation is provided in Figure 4.

Figure 4: Visualization of the product of cores given by Tensor Train decomposition of a tensor in a space . The tensor network representations of and are presented.

Using Definition 1, the mode-n unfolding of a tensor in Eq. (6) where can be written as

(8)
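The TT format of Eq. (6) can likewise be sketched in NumPy (our convention: cores[n] has shape (R_{n-1}, I_n, R_n) with boundary ranks equal to one); contracting the cores from left to right recovers the full tensor.

```python
import numpy as np

def tt_to_tensor(cores):
    """cores[n] has shape (R_{n-1}, I_n, R_n); returns the full tensor."""
    tensor = cores[0]                                   # (1, I_1, R_1)
    for core in cores[1:]:
        # contract the trailing rank of `tensor` with the leading rank of `core`
        tensor = np.tensordot(tensor, core, axes=([-1], [0]))
    return tensor[0, ..., 0]                            # drop the boundary ranks

shape, ranks = (3, 4, 5, 6), (1, 2, 3, 2, 1)
cores = [np.random.randn(ranks[n], shape[n], ranks[n + 1]) for n in range(len(shape))]
print(tt_to_tensor(cores).shape)  # (3, 4, 5, 6)
```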

3 Tensor Regression Layer

Figure 5: Visualization of the tensor regression layer (TRL) using tensor networks. The input tensor and the weight tensor are represented by circular nodes connected by edges, which represent the contraction operation between two tensors.

In this section, we introduce the tensor regression layer with various low-rank tensor approximations. As stated in Section 1, the last fully-connected layer of a traditional CNN accounts for a large proportion of the model parameters. In addition to this large consumption of computational resources, the flattening operation leads to a loss of the rich multi-modal information present in the last convolutional layer. The tensor regression layer [9] replaces the final flattening operation and fully-connected layers of a CNN with a multilinear map of low Tucker rank. In this work, we explore imposing other low-rank constraints on the weight tensor and compare the compression and regularization effects of using either the CP, Tucker or TT decomposition.

Given an input tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and a weight tensor $\mathcal{W} \in \mathbb{R}^{I_1 \times \cdots \times I_N \times C}$, where $C$ is the number of classes, we investigate the function

$f(\mathcal{X}) = \langle \mathcal{X}, \mathcal{W} \rangle + \mathbf{b}$     (9)

where $\mathbf{b} \in \mathbb{R}^{C}$ is a bias vector added to the contraction of $\mathcal{X}$ and $\mathcal{W}$ over the first $N$ modes. The tensor network representation of an example of Eq. (9) is given in Figure 5. The main idea behind tensor regression layers is to enforce a low tensor rank structure on $\mathcal{W}$ in order to both reduce memory usage and leverage the multilinear structure of the input $\mathcal{X}$.

Throughout the paper, we denote a TRL with TT decomposition by TT-TRL. Similarly we use CP-TRL and Tucker-TRL for a TRL with CP or Tucker decomposition.
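As a minimal sketch of Eq. (9) (our notation; in practice the weight would never be stored in full but kept in factorized form), the TRL output is a contraction of the activation tensor with the weight tensor over all input modes, plus a bias:

```python
import numpy as np

def trl_forward(x, weight, bias):
    """x: (I_1, ..., I_N), weight: (I_1, ..., I_N, C), bias: (C,)."""
    modes = list(range(x.ndim))
    return np.tensordot(x, weight, axes=(modes, modes)) + bias

x = np.random.randn(6, 6, 32)        # e.g. the output of the last convolution
w = np.random.randn(6, 6, 32, 10)    # 10 output classes
print(trl_forward(x, w, np.zeros(10)).shape)  # (10,)
```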

CP decomposition.    First, we investigate approximating the weight tensor with a CP decomposition. Using Eq. (2) and Eq. (3), Eq. (9) can be rewritten as

(10)
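A hedged sketch of this CP formulation (our notation: factors[n] has shape (I_n, R) for the input modes and factors[-1] has shape (C, R) for the class mode): the contraction of Eq. (9) is computed rank by rank, without ever forming the full weight tensor.

```python
import numpy as np

def cp_trl_forward(x, factors, bias):
    """x: (I_1, ..., I_N); factors: N matrices (I_n, R) plus one class matrix (C, R)."""
    rank = factors[0].shape[1]
    out = bias.astype(float)
    for r in range(rank):
        s = x
        for factor in factors[:-1]:
            s = np.tensordot(s, factor[:, r], axes=([0], [0]))  # contract one mode
        out = out + factors[-1][:, r] * s                       # s is now a scalar
    return out

dims, rank, n_classes = (4, 5, 6), 3, 10
factors = [np.random.randn(i, rank) for i in dims] + [np.random.randn(n_classes, rank)]
x = np.random.randn(*dims)
print(cp_trl_forward(x, factors, np.zeros(n_classes)).shape)  # (10,)
```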

We can use this formulation to obtain the partial derivatives needed to implement gradient based optimization methods (e.g. backpropagation), indeed

(11)

for all of the matrices for . Furthermore, for a given mode , we can naturally arrange these partial derivatives into a third order tensor and obtain their expression using unfolding:

for , and

Tucker decomposition.    As described in Section 2, the Tucker decomposition approximates an input tensor by a core tensor and a set of factor matrices. We can rewrite Eq. (9) using the Tucker approximation of the weight tensor as

(12)

where the tensor is approximated with

(13)

The tensor network representation of Eq. (12) is shown in Figure 8. Given a tensor of size , the function maps this tensor to the space under low-rank constraints.

We can again obtain concise expressions for the partial derivatives using unfoldings, for example:

(14)
(15)

and

(16)
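A hedged sketch of the Tucker-TRL forward pass (our notation, not the paper's code): the input is first projected onto the factor matrices, then contracted with the core, and finally mapped to the classes by a last factor matrix.

```python
import numpy as np

def tucker_trl_forward(x, core, factors, class_factor, bias):
    """x: (I_1,...,I_N); core: (R_1,...,R_N,R_out); factors[n]: (I_n, R_n);
    class_factor: (C, R_out); bias: (C,)."""
    z = x
    for factor in factors:                              # project each input mode
        z = np.tensordot(z, factor, axes=([0], [0]))
    modes = list(range(z.ndim))
    z = np.tensordot(z, core, axes=(modes, modes))      # contract with the core -> (R_out,)
    return class_factor @ z + bias

dims, ranks, n_classes, r_out = (4, 5, 6), (2, 3, 4), 10, 5
core = np.random.randn(*ranks, r_out)
factors = [np.random.randn(i, r) for i, r in zip(dims, ranks)]
v, b = np.random.randn(n_classes, r_out), np.zeros(n_classes)
print(tucker_trl_forward(np.random.randn(*dims), core, factors, v, b).shape)  # (10,)
```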

Tensor Train decomposition.    The tensor network visualization is given in Figure 8, where the weight tensor is replaced with its TT representation. Using Eq. (6) and (8), in the case of TT decomposition Eq. (9) can be rewritten as

(17)

where the second equality follows from the fact that . Similarly to the case of CP and Tucker decomposition, the partial derivatives can be summarized with

(18)

for all and , and

(19)
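A hedged sketch of the TT-TRL forward pass (our convention: one core of shape (R_{n-1}, I_n, R_n) per input mode plus a final core of shape (R_N, C, 1) for the class mode); the input is contracted with the cores one mode at a time.

```python
import numpy as np

def tt_trl_forward(x, cores, bias):
    """x: (I_1,...,I_N); cores: N input cores plus one class core; boundary ranks are 1."""
    m = np.tensordot(x, cores[0], axes=([0], [1]))         # -> (I_2,...,I_N, 1, R_1)
    for core in cores[1:-1]:
        m = np.tensordot(m, core, axes=([0, -1], [1, 0]))  # contract next mode and rank
    m = np.tensordot(m, cores[-1], axes=([-1], [0]))       # -> (1, C, 1)
    return m.reshape(-1) + bias

dims = (4, 5, 6, 10)                 # three input modes followed by the class mode
ranks = (1, 2, 3, 4, 1)
cores = [np.random.randn(ranks[n], dims[n], ranks[n + 1]) for n in range(len(dims))]
x, b = np.random.randn(4, 5, 6), np.zeros(10)
print(tt_trl_forward(x, cores, b).shape)  # (10,)
```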

(a) TT-TRL

(b) Tucker-TRL
Figure 8: Tensor network visualization of the tensor regression layer with Tucker and TT decompositions. Each label attached to an edge represents the dimension shared between the two tensors joined by that contraction.

4 Tensor perspective on Global Average Pooling layer

In this section, we provide insight into the Global Average Pooling (GAP) layer from the perspective of tensor algebra. In particular, we show that the GAP layer is a special case of the Tucker-TRL.

It is traditional practice to apply a flattening operation to the output tensor of the last convolutional layer before extracting its features. The problem with this approach lies in generalization to the test set: several works on deep neural networks show that fully-connected layers are prone to overfitting, leading to poor performance on test data [7, 13, 11].

To tackle this generalization problem and to provide regularization, the Global Average Pooling (GAP) layer was introduced by Lin et al. [13]. It replaces the combination of the vectorization operation and the fully-connected layer with an averaging operation over all slices along the output-channel axis. The output of a GAP layer is thus a single vector whose size is the number of output channels. The GAP layer was empirically shown to significantly reduce the number of model parameters in CNNs [13].

The authors of [13] claim not only that the GAP layer reduces the number of trainable model parameters but also that it can prevent the model from overfitting during training. Over the last decade, the GAP layer has been adopted in some of the most successful image classification models such as Residual Networks and VGG-16 [5, 20].

A more general interpretation of the convolutional output is that it is a higher-order tensor in a space . Given such a tensor, the GAP layer outputs a vector defined by

(20)

We here assume that the axis for the output channel corresponds to the last mode of the tensor . We now show that a GAP layer mapping to is equivalent to a specific Tucker-TRL with rank . Indeed, let

be the regression tensor of a Tucker-TRL, with where for each , and . We have

(21)

Observe that the composition of a GAP layer with a fully-connected layer mapping to can also be achieved by a single Tucker-TRL by setting to be the weight matrix of the fully-connected layer instead of the identity. A graphical representation of this equivalence is shown in Figure 12.
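The equivalence can be checked numerically; the following sketch (our construction, assuming a 6 x 6 x 32 activation tensor and 10 classes) builds the corresponding Tucker-structured weight explicitly, with averaging vectors as spatial factors and the fully-connected weights as the class factor.

```python
import numpy as np

x = np.random.randn(6, 6, 32)                 # H x W x output channels
m = np.random.randn(10, 32)                   # fully-connected weights, 10 classes

gap_then_fc = m @ x.mean(axis=(0, 1))         # GAP over the spatial modes, then FC

# Equivalent regression tensor: constant averaging factors on the two spatial
# modes, identity on the channel mode and m on the class mode.
w = np.einsum('i,j,kc->ijkc', np.full(6, 1 / 36), np.ones(6), m.T)
trl = np.tensordot(x, w, axes=([0, 1, 2], [0, 1, 2]))

print(np.allclose(gap_then_fc, trl))          # True
```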

(a) Substituting the factor matrices with vectors

(b) Averaging along the output-channel axis

(c) Linear transformation
Figure 12: Tensor network representation of the GAP layer. (a) The factor matrices are replaced with vectors. (b) The contraction operations between the tensor and the factor matrices are performed. (c) The most simplified form: the product of a matrix and a vector.

5 Observations on Rank Constraints

In this section, we provide a simple guideline for choosing one of the components of the low-rank constraints enforced on a TRL. In particular, we observe that the CP rank parameter and the last Tucker/TT rank parameter affect the dimension of the image of the function computed by the TRL. As a consequence of this observation, if a TRL is used as the last layer of a network before a softmax activation in a classification task, setting this rank parameter to values greater than the number of classes leads to unnecessary redundancy, while setting it to smaller values can detrimentally limit the expressiveness of the resulting classifier.

We start with a simple lemma needed to obtain the upper bound on the dimension of the image of the regression function. It shows that if a matrix admits a factorization, then the dimension of the image of the linear map defined by this matrix is upper-bounded by the inner dimension of the factorization.

Lemma 1.

If with and , then where .

Proof.

Given such a function f, the dimension of the image of f is , which is the dimension of the space spanned by the column vectors of = . That is, . Each column vector of the matrix is a linear combination of the column vectors of , from the equation , where denotes the -th column vector of . Since the matrix lies in the space , the dimension of the span of the column vectors of is upper-bounded by , namely . ∎

Using Lemma 1, we can provide upper bounds on the dimension of the image of the regression function of a TRL under different tensor rank constraints.

Proposition 2.

Let where . The following hold:

  • if admits a TT decomposition of rank , then ,

  • if admits a Tucker decomposition of rank , then ,

  • if admits a CP decomposition of rank , then .

Proof.

If admits a TT decomposition with TT rank , then by Eq. (6), we have . Using the matricization of given by Eq. (8), we can write as follows;

(22)

and consequently, since and are of size and respectively, we have by Lemma 1.

The other two points can be proven in a similar fashion using Eq. (4) and (5) for Tucker, and Eq. (2) and (3) for CP. ∎

We have shown that the dimension of the image of the function is upper-bounded by one of the tensor rank parameters. We refer to this specific component of the rank tuple as the bottleneck rank.

Definition 2.

Given a regression tensor , if admits a Tucker Decomposition with rank , we define the rank as the bottleneck rank. Similarly, if admits a TT decomposition with TT-rank , we define as the bottleneck rank.

This observation on the rank constraints used in a tensor regression layer provides a simple guideline for choosing the bottleneck rank. For instance, when a TRL is used as the last layer of an architecture for a classification task, setting the bottleneck rank to a value smaller than the number of classes could limit the expressiveness of the TRL (which we empirically demonstrate in Section 6.1), while setting it to a value higher than the number of classes could lead to redundancy in the model parameters.
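The effect of the bottleneck rank can be illustrated numerically. In the toy NumPy sketch below (our construction, not an experiment from the paper), a CP weight of rank 3 is used with 10 output classes; the bias-free outputs of many random inputs span a subspace of dimension at most 3, as predicted by Proposition 2.

```python
import numpy as np

rng = np.random.default_rng(0)
dims, n_classes, rank = (4, 5, 6), 10, 3        # bottleneck rank 3 < 10 classes

factors = [rng.standard_normal((i, rank)) for i in dims]
class_factor = rng.standard_normal((n_classes, rank))
w = np.einsum('ir,jr,kr,cr->ijkc', *factors, class_factor)   # full rank-3 CP weight

outputs = np.stack([np.tensordot(rng.standard_normal(dims), w, axes=3)
                    for _ in range(1000)])
print(np.linalg.matrix_rank(outputs))           # prints 3, not 10
```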

6 Experiments

In this section we provide experimental evidence that 1) supports our analysis of TRLs in Section 5 and 2) investigates the compression and regularization power of the different low-rank constraints. We present experiments with tensor regression layers using the CP, Tucker and TT decompositions on the benchmark datasets MNIST [12], Fashion-MNIST [22], CIFAR-10 and CIFAR-100 [10].

Figure 13: Best viewed in color. Test error as a function of the number of parameters in the TRL. The three types of TRL (CP, TT and Tucker) are compared in terms of their regularization effect. Left: MNIST dataset. Right: Fashion-MNIST dataset. For all entries, we ran the experiment 5 times and report confidence intervals with a critical value of .

6.1 MNIST and Fashion-MNIST dataset

The MNIST dataset [12] consists of 1-channel images of handwritten digits from 0 to 9. The dataset contains 60k training examples and a test set of 10k examples. The purpose of this experiment is to provide insight into the regularization power of different low-rank constraints. Our baseline classifier is a CNN with convolutional layers followed by a fully-connected layer. Rectified linear units (ReLU) [15] are used between layers as non-linear activations. We tested the model with three tensor approximations: CP, Tucker and TT. By applying various low-rank constraints, we aim to show that as the rank constraints are relaxed, the approximation error becomes smaller and the accuracy of the low-rank model approaches that of the model without regularization (i.e. without low-rank constraints).

We concisely review the choice of low-rank constraints for the Tucker, TT and CP models. The detailed experimental configuration is available online at https://github.com/xwcao/LowRankTRN. Given an output tensor from the final convolutional layer, where denotes the number of samples in one batch, we constrain the weight tensor with a Tucker decomposition of rank . Following Proposition 2, the bottleneck rank is set to for the TRLs with Tucker and TT constraints.

Following [11], we initialized the weights in each layer from a zero-mean normal distribution with standard deviation . The bias term of each layer is initialized to a constant . For the Tucker-TRL, we conducted a total of experiments, per low-rank Tucker-TRL, where were set with the constraints , and respectively. A set of experiments was conducted for the TT-TRL as well, with the TT-rank set to and . For the CP-TRL, we simply evaluated the performance with the rank chosen from the set .

We also evaluate the empirical performance of the TRL on another MNIST-like dataset: Fashion-MNIST. The dataset consists of 60k training and 10k test images, where each sample belongs to one of ten classes of fashion items such as shoes, clothes and hats. We used the same CNN architecture and hyperparameters as for the MNIST dataset.

Experimental outcomes for both datasets are provided in Figure 13, where we can see that all low-rank approximation models exhibit similar performance on both MNIST and Fashion-MNIST. As for the regularization effect, we observe that as the low-rank constraints are relaxed, the accuracy of each model gradually converges to that of the baseline model. This result illustrates the regularization power that low-rank constraints provide. We also conducted experiments using a GAP layer instead of the fully-connected layer on both MNIST and Fashion-MNIST. In both cases, the model performed very poorly compared to the fully-connected layer: with MNIST and with Fashion-MNIST.

We conducted a similar experiment to provide empirical support for Proposition 2. In Section 5 we showed that the dimension of the image of a TRL is upper-bounded by the bottleneck rank. We ran experiments in which the bottleneck rank is fixed to one of . The results presented in Figure 14 show clear distinctions among models with different bottleneck ranks: the bottleneck rank affects the test accuracy by upper-bounding the dimension of the image of the TRL.

Figure 14: Comparison of test errors of TRLs with TT and Tucker decompositions. We relax the low-rank constraints while fixing the bottleneck rank (see Definition 2) of both the TT and Tucker decompositions. TT-n (resp. Tucker-n) denotes a model with bottleneck rank n. Left: MNIST dataset. Right: Fashion-MNIST dataset.

6.2 CIFAR-10 and CIFAR-100 dataset with Residual Networks

We evaluate the performance of the tensor regression layer on two further benchmark datasets, CIFAR-10 and CIFAR-100, with deep CNNs. The CIFAR-10 dataset [10] consists of 50k training and 10k test images from 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck. Similarly, CIFAR-100 consists of colored images from 100 classes [10]. We employ Residual Networks [6] and replace the GAP layer with a CP-, Tucker- or TT-TRL. Following [6], we trained the models with an initial learning rate of and momentum of . The learning rate is multiplied by at k and k iteration steps and training is terminated at k steps. The size of each batch was set to . We set the weight decay to . The images are pre-processed with whitening normalization followed by random horizontal flips and cropping with a padding of 2 pixels on each side.

The experimental results are reported in Table 1 for a 32-layer Residual Network [6] on CIFAR-10 and a 164-layer ResNet on CIFAR-100. To compare compression rates, we set the baseline model to be the Residual Network with a fully-connected layer instead of the GAP layer. The errors in Table 1 are obtained by choosing the model with the best validation score. The experiment shows that the CP-TRL achieves test accuracy comparable to the ResNet with a GAP layer; however, the GAP layer performed best in terms of both compression and accuracy.
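For reference, the compression rates in Table 1 can be reproduced from simple parameter counts. The sketch below assumes an 8 x 8 x 64 final feature map and 10 classes for ResNet-32 on CIFAR-10 (these dimensions are our assumption) and compares a full fully-connected layer with GAP and with a CP-TRL; formulas for the Tucker and TT parameter counts are included for completeness.

```python
dims, n_classes = (8, 8, 64), 10          # assumed final feature map and class count

fc_params = 8 * 8 * 64 * n_classes        # 40,960 weights in the flattened FC layer
gap_params = 64 * n_classes               # GAP followed by a small FC layer

def cp_params(rank):
    return rank * (sum(dims) + n_classes)

def tucker_params(ranks):                 # ranks = (R_1, R_2, R_3, R_4)
    core = 1
    for r in ranks:
        core *= r
    return core + sum(d * r for d, r in zip(dims + (n_classes,), ranks))

def tt_params(ranks):                     # ranks = (1, R_1, R_2, R_3, 1)
    return sum(ranks[n] * d * ranks[n + 1] for n, d in enumerate(dims + (n_classes,)))

for rank in (5, 50, 100):
    print(f"CP rank {rank}: CR = {fc_params / cp_params(rank):.1f}")   # 91.0, 9.1, 4.6
print(f"GAP: CR = {fc_params / gap_params:.1f}")                       # 64.0, as in Table 1
```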

Layer Type Rank CIFAR-10 CIFAR-100
Vali Test CR Vali Test CR
FC - 8.36 8.28 1.0 36.68 36.36 1.0
GAP - 7.62 8.18 64.0 29.68 29.42 64.0
CP-TRL 5 8.32 8.43 91.0 34.64 36.01 455.1
50 8.18 8.11 9.1 30.92 30.73 45.5
100 8.42 8.05 4.6 31.28 31.72 22.7
Tucker-TRL 8.30 8.39 41.0 33.34 32.26 24.5
7.78 8.39 7.0 30.86 31.53 6.6
7.92 8.58 0.9 TF TF 1.0
TT-TRL 8.18 8.47 54.2 31.12 30.95 25.0
7.86 9.13 7.1 30.28 31.08 6.6
8.36 8.56 0.9 31.64 32.64 1.0
Table 1: Comparison of errors (%) of the ResNet-32 (resp. ResNet-164) model with different layers after the final convolutional layer on the CIFAR-10 (resp. CIFAR-100) dataset. The training error at the end of the training stage was for all models. L denotes 10 for CIFAR-10 and 100 for CIFAR-100. FC = Fully-Connected, TF = Training Failed, CR = Compression Rate.
Layer Size of Training Data
100 500 2,000 15,000
Rank Vali Test Rank Vali Test Rank Vali Test Rank Vali Test
FC - 24.48 20.73 - 11.48 7.95 - 4.18 3.86 - 1.64 1.33
FC-L2 - 20.00 19.25 - 8.00 7.86 - 3.24 3.31 - 1.50 1.35
FC DO - 17.26 17.86 - 5.80 5.98 - 2.86 2.53 - 1.16 1.21
GAP - 25.30 22.10 - 11.02 10.46 - 4.54 4.85 - 2.78 3.15
CP-TRL 10 25.56 17.88 30 11.80 8.91 30 4.42 4.06 30 2.30 2.04
CP-TRL DO 10 14.82 16.06 30 6.28 5.91 30 2.42 2.75 30 1.82 1.38
Tucker-TRL [7,7,30,10] 25.98 22.45 [7,7,32,10] 9.04 8.29 [7,7,7,10] 4.08 3.73 [7,7,15,10] 1.76 1.65
Tucker-TRL DO [7,7,7,10] 12.70 13.11 [7,7,30,10] 5.16 5.49 [7,7,32,10] 2.56 2.28 [7,7,30,10] 1.26 1.17
TT-TRL [1,7,15,10,1] 23.24 20.39 [1,7,30,10,1] 9.08 8.59 [1,7,15,10,1] 3.64 3.82 [1,7,30,10,1] 1.66 1.36
TT-TRL DO [1,7,7,10,1] 14.96 14.85 [1,7,30,10,1] 5.42 5.18 [1,7,30,10,1] 2.42 2.28 [1,7,32,10,1] 1.24 1.31
(a) MNIST
Layer Size of Training Data
100 500 2,000 15,000
Rank Vali Test Rank Vali Test Rank Vali Test Rank Vali Test
FC - 78.30 78.33 - 49.74 49.56 - 27.92 28.57 - 16.66 17.64
FC-L2 - 76.64 75.73 - 46.06 44.25 - 26.54 27.11 - 15.68 16.80
FC DO - 75.12 74.37 - 41.20 41.41 - 26.14 28.27 - 19.64 20.22
GAP - 86.36 84.48 - 75.38 73.44 - 72.48 69.58 - 80.56 75.72
CP-TRL 30 79.02 77.65 30 55.58 57.34 10 32.46 33.85 30 20.32 21.11
CP-TRL DO 30 75.78 73.52 30 37.34 39.85 30 27.92 28.90 30 21.38 21.80
Tucker-TRL [8,8,16,10] 79.78 80.27 [8,8,16,10] 43.84 44.29 [8,8,32,10] 24.48 24.70 [8,8,16,10] 15.26 17.27
Tucker-TRL DO [8,8,32,10] 72.42 71.28 [8,8,32,10] 34.84 34.71 [8,8,16,10] 22.32 24.50 [8,8,64,10] 15.08 16.23
TT-TRL [1,8,64,10,1] 78.64 77.73 [1,8,64,10,1] 44.10 43.78 [1,8,64,10,1] 24.78 25.50 [1,8,32,10,1] 14.76 16.28
TT-TRL DO [1,8,64,10,1] 73.36 71.63 [1,8,64,10,1] 36.64 36.37 [1,8,64,10,1] 22.98 24.60 [1,8,64,10,1] 15.24 16.52
(b) SVHN
Layer Size of Training Data
100 500 2,000 15,000
Rank Vali Test Rank Vali Test Rank Vali Test Rank Vali Test
FC - 78.14 76.39 - 64.44 61.82 - 27.92 28.57 - 40.70 41.88
FC-L2 - 76.86 75.77 - 62.48 61.66 - 26.54 27.11 - 39.54 41.40
FC DO - 73.58 73.93 - 61.38 61.26 - 26.14 28.27 - 42.08 42.33
GAP - 71.90 71.74 - 60.92 60.14 - 72.48 69.58 - 57.26 57.56
CP-TRL 10 77.94 77.76 30 67.84 67.05 10 32.46 33.85 30 45.16 46.59
CP-TRL DO 10 74.46 75.29 30 62.40 61.11 30 27.92 28.90 30 47.20 48.57
Tucker-TRL [8,8,64,10] 77.60 77.21 [8,8,8,10] 63.84 64.63 [8,8,32,10] 24.48 24.70 [8,8,64,10] 40.28 41.36
Tucker-TRL DO [8,8,32,10] 74.02 73.59 [8,8,32,10] 59.58 58.54 [8,8,16,10] 22.32 24.50 [8,8,32,10] 40.20 40.51
TT-TRL [1,8,8,10,1] 74.78 75.11 [1,8,64,10,1] 63.94 62.70 [1,8,64,10,1] 24.78 25.50 [1,8,64,10,1] 38.16 38.57
TT-TRL DO [1,8,32,10,1] 72.44 73.51 [1,8,32,10,1] 57.88 58.30 [1,8,64,10,1] 22.98 24.60 [1,8,64,10,1] 38.38 38.64
(c) CIFAR-10
Table 5: On the regularization effect of the TRL. Comparison of validation/test errors (%) for different numbers of training samples. Each column under Size of Training Data gives the number of samples used to train each model. We used the same k validation samples to select the best model in all experiments. FC-L2 = FC layer with L2 regularization. DO = Dropout.

6.3 On the regularization effect of TRL

In this section, we investigate the performance of the TRL focusing on its role as a regularizer for convolutional neural networks. We used shallow CNNs with different train/validation splits, keeping the number of training samples small, and compare the performance of the TRL with a fully-connected layer and a GAP layer. To improve regularization performance, Dropout [21] and weight decay were also included in the comparison. The training sets are obtained by randomly selecting samples from the initial training set, keeping k samples for validation in each train/validation split.

We evaluate the performance of each model on three datasets: MNIST, Street View House Numbers (SVHN) [16] and CIFAR-10. The SVHN dataset consists of colored images of house numbers and contains k training and k test samples. We employed a CNN with two (resp. three) convolutional layers for the MNIST dataset (resp. the CIFAR-10 and SVHN datasets). Dropout is inserted after the final convolutional layer.

The rank of each TRL is selected based on the dimensions of the output tensor, as in Section 6.1. All experiments use early stopping, with the maximum number of steps set to for MNIST and to for SVHN and CIFAR-10. The dropout rate is selected based on validation accuracy, with the hyper-parameter sampled from . The decay factor for L2 regularization is similarly chosen from the set .

The outcome of the experiment is presented in Table 5. A notable behavior of the TRL can be observed: in most settings, using dropout with a Tucker- or TT-TRL achieves better test accuracy than using dropout with a fully-connected layer.

7 Conclusion

The tensor regression layer replaces the final flattening operation and fully-connected layers with a tensor regression of low tensor rank structure. In this work, we presented and investigated TRLs with CP, Tucker and TT decompositions and showed that the learning procedure for each type of tensor regression layer can be derived using tensor algebra. We presented an analysis of the upper bound on the dimension of the image of the regression function, showing that the rank of the Tucker decomposition and the TT ranks affect this dimension.

We evaluated the proposed models on benchmark datasets of handwritten digits and natural images. We did not observe significant differences in accuracy among TRLs with the various decompositions on the MNIST and CIFAR-10 datasets. The results with a state-of-the-art deep convolutional model show that, compared to a baseline model with a fully-connected layer, the TRL with CP decomposition achieved a compression rate of with a sacrifice in accuracy of . Compared to the Residual Network with a GAP layer, our model empirically exhibits comparable performance in both accuracy and compression rate.

References