Cascaded Subpatch Networks for Effective CNNs

03/01/2016, by Xiaoheng Jiang et al., Tianjin University

Conventional Convolutional Neural Networks (CNNs) use either a linear or a non-linear filter to extract features from an image patch (region) of spatial size H×W (typically, H is small and equal to W, e.g., H is 5 or 7). Generally, the size of the filter is equal to the size H×W of the input patch. We argue that the representational ability of this equal-size strategy is not strong enough. To overcome the drawback, we propose to use a subpatch filter whose spatial size h×w is smaller than H×W. The proposed subpatch filter consists of two subsequent filters. The first one is a linear filter of spatial size h×w and is aimed at extracting features from the spatial domain. The second one is of spatial size 1×1 and is used for strengthening the connection between different input feature channels and for reducing the number of parameters. The subpatch filter convolves with the input patch and the resulting network is called a subpatch network. Taking the output of one subpatch network as input, we further repeat constructing subpatch networks until the output contains only one neuron in the spatial domain. These subpatch networks form a new network called the Cascaded Subpatch Network (CSNet). The feature layer generated by CSNet is called the csconv layer. For the whole input image, we construct a deep neural network by stacking a sequence of csconv layers. Experimental results on four benchmark datasets demonstrate the effectiveness and compactness of the proposed CSNet. For example, our CSNet reaches a test error of 5.68% on the CIFAR10 dataset without model averaging. To the best of our knowledge, this is the best result ever obtained on the CIFAR10 dataset.


I Introduction

Convolutional neural networks (CNNs) [11, 15] have achieved great success in the field of computer vision, including image classification [9, 12, 22, 29, 30, 31] and object detection [6, 7, 17, 33, 18]. The underlying reason lies in the fact that a CNN is able to learn a hierarchy of features [1, 3, 4, 34] that can represent objects at different levels. Low-level features denote visual features such as edges, dots, and textures, whereas high-level features represent objects in a semantic way. Low-level features are shared by all objects, while high-level features are highly discriminative. High-level features are learned progressively from low-level features. All these features are in fact learned through a series of linear and non-linear transformations, which are the primary elements of CNNs.

Typically, a CNN consists of several computational building blocks: convolution, activation, and pooling. They work together to fulfill the task of feature extraction and transformation. Convolution takes the inner product of a linear filter and a local region of the input channels. Activation imposes a non-linear transformation on the convolutional results. Pooling aggregates the responses within a given region. Among these three building blocks, the convolutional block plays the most important role in a CNN. It controls the number of feature maps (i.e., the width of the CNN) and the number of layers (i.e., the depth of the CNN). The width and depth determine the capacity of a CNN. The size of a neural network is a double-edged sword. On the one hand, a large size means a large capacity, which makes it possible for deep networks to learn rich features that are essential for recognizing tens or even thousands of object categories. On the other hand, a large size typically means a larger number of parameters, which makes the enlarged network more prone to over-fitting, especially when the number of labelled samples in the training set is limited. Moreover, a large network dramatically increases the consumption of computational resources.

To construct a compact and powerful network, we propose a novel type of convolutional filter. Given a local patch, traditional CNNs typically use a convolutional filter of the same size as the patch to extract features. We argue that this level of abstraction is not strong enough to generate robust features. A Multi-Layer Perceptron (MLP) [14] can be used to impose a more complex transformation. However, it is still not complex enough to represent input data that lies on a highly non-linear manifold. Therefore, in this paper, we propose to use cascaded subpatch filters to bring in much more complex structures to abstract the local patch within the receptive field. One subpatch filter contains two subsequent convolutional filters: the first abstracts subpatches of the input patch, and the second fully connects all the output channels of the first. Taking the convolutional output of the previous subpatch filter as input, we repeat constructing new subpatch filters until the final output contains only one neuron in the spatial domain. This results in the Cascaded Subpatch Network (CSNet). A CSNet can be used to replace the conventional convolutional layer to extract more complex and more robust features. We call the resulting layer a csconv layer. A deep neural network can be obtained by stacking multiple csconv layers. For clarity, in the rest of this paper, the overall deep network containing multiple csconv layers is called a CSNet.

The goal of the proposed method is to construct a more effective structure for abstracting the local patch. Instead of designing a CNN that is too wide (i.e., too many feature maps in one layer) or too deep (i.e., too many layers), we present a novel neural network that is compact yet powerful. Specifically, the contributions and merits of this paper are summarized as follows.

  1. We gain new insight into the convolutional block of a CNN. When abstracting one local patch of size H×W, we propose to use cascaded subpatch filters to replace the conventional convolutional filter.

  2. A subpatch filter consists of an h×w linear filter followed by a 1×1 filter. Its purpose is to impose a complex transformation on the subpatches of the input patch while reducing the number of parameters.

  3. Cascaded subpatch filters contain a sequence of subpatch filters, which together reach the goal of generating a more complex and more robust abstraction of the local patch. The cascaded subpatch filters can be regarded as a new convolutional kernel structure called the csconv filter. A csconv filter abstracts the local patch much better than a conventional filter.

  4. The csconv filter is a flexible structure that can be constructed from a group of different subpatch filters according to the size of the local region and the desired number of parameters.

  5. We build several CSNets with different numbers of parameters to deal with different tasks, and our CSNets achieve state-of-the-art performance on four widely used benchmark image classification datasets.

This paper is organized as follows. Section II reviews the related work. Section III presents the proposed CSNet method. The experimental results are given in Section IV. Finally, Section V concludes this paper.

II Related Work

Since the great success of AlexNet [12] on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC-2010), a number of attempts have been made to improve the architecture of CNNs in order to achieve better accuracy. We divide these methods into the following three categories.

(1) Parameter adjusting. Some researchers paid attention to the parameters of CNNs, such as the sizes of the convolutional filters, the strides of the filters, the number of feature channels in each layer, and the number of convolutional layers. They tried to adjust these parameters to improve the performance of CNNs through exhaustive experiments. Zeiler and Fergus [25] visualized a trained CNN model and found that a large filter size and a large stride in the first convolutional layer could cause aliasing artifacts. Therefore, they used a smaller receptive window size and a smaller stride. Sermanet et al. [21] utilized smaller strides in the first convolution, a larger number of feature maps, and a larger number of layers, and achieved better results than AlexNet. The VGG network [22] pushes the depth of CNNs up to 19 layers by using very small convolutional filters and gains a significant improvement. These efforts can be viewed as preliminary explorations of how to construct networks with better performance.

(2) Structure designing. Another line of improvements goes further into the design of new CNN structures. Network in Network (NIN) [14] utilizes a shallow Multi-Layer Perceptron (MLP) to increase the representational power of neural networks. The MLP is a more complex structure consisting of multiple fully connected layers. Conventional linear convolution and the MLP together result in a new convolutional structure called mlpconv. Mlpconv can easily be implemented by stacking additional 1×1 convolutional layers on a conventional convolutional layer. These 1×1 convolutions actually enhance the connection between different feature channels. Therefore, mlpconv is able to abstract local regions much more effectively than conventional convolution. Szegedy et al. [24] constructed the 22-layer GoogLeNet by stacking dozens of Inception modules. Each Inception module contains a group of convolutional filters of different sizes that aim at capturing information at multiple scales. However, such an Inception module is too wide to be used efficiently in a very deep network. To avoid an explosion in the number of parameters, GoogLeNet uses 1×1 convolutions as dimension reduction modules to remove computational bottlenecks. As a result, both the depth and the width of GoogLeNet can be increased without a significant increase in parameters.

(3) Deeper and wider networks. Since increasing the size of a CNN is the most straightforward way to improve its performance, researchers went even further and designed much deeper networks of up to hundreds or even thousands of layers. Highway Networks [23] make it possible to train very deep networks, even with hundreds of layers, by using adaptive gating units to regulate the information flow. More importantly, Highway Networks are able to train deeper networks without sacrificing generalization ability. The 32-layer highway network presented in [23] achieved the state-of-the-art performance on the CIFAR10 [37] dataset. Building on this line of work, He et al. [32] recently presented a residual learning framework to effectively train networks that are substantially deeper than those used previously. They constructed residual nets (ResNets) with a depth of up to 152 layers and evaluated them on the ImageNet dataset. They also presented ResNets with over 100 and 1000 layers and evaluated them on the CIFAR10 dataset. They argued that the depth of representations is of central importance for many visual recognition tasks. In addition to the depth of a CNN, its width is also very important. Shao et al. [2] pointed out that combining multicolumn deep neural networks can enhance robustness. Instead of simply averaging the outputs of the multicolumn predictions, they learned a compact representation from multicolumn deep neural networks by embedding the features of all the penultimate layers into a multispectral space. The resulting features are then used for classification. Their multispectral neural networks (MSNN) in fact make use of the complementary information captured by the different networks. Since the MSNN has to use multiple networks that do not share parameters, the computation increases with the number of networks.

We agree that both the width and the depth of networks are important for visual recognition tasks. However, larger capacity does not guarantee higher accuracy. Given a dataset with limited samples, once a network reaches its peak performance, it is difficult to further improve it by simply adding more feature maps or stacking more convolutional layers. This means that the discriminability of a network does not increase indefinitely with its size. The comparison between ResNet-110 [32] and ResNet-1202 [32] on the CIFAR10 dataset supports our viewpoint. ResNet-110 is a 110-layer CNN with 1.7M parameters, and ResNet-1202 is a 1202-layer CNN with up to 19.4M parameters. However, ResNet-110 achieves a test error of 6.43% whereas ResNet-1202 achieves a test error of 7.93%. That is, the classification ability of a deep neural network may suffer from excessive parameters. Therefore, it is important to explore new methods to learn features in a more effective way.

III Proposed Method

This paper is aimed at using subpatch filters to construct compact and powerful CNNs. One of the characteristics of the proposed method is that the size of the subpatch filter is smaller than that of the patch to be represented. In our method, cascaded subpatch filters are used to represent a patch. We take the cascaded subpatch filters as a whole and call it a csconv filter. Applying csconv filters layer by layer results in a deep CNN which we call CSNet. In this section, we first introduce the subpatch filter. Next, we describe the cascaded subpatch filters. Then, the CSNet is presented. Finally, an analysis of the computational complexity of CSNet is given.

Fig. 1: A subpatch filter of size (h×w, 1×1) consists of an h×w filter and a 1×1 filter. The input is a patch P of size H×W and the convolutional output is a patch of size H′×W′ with H′ = H−h+1 and W′ = W−w+1.
(a) A conventional filter.
(b) An n-stage csconv filter
Fig. 2: Comparison between the conventional filter and the proposed csconv filter. (a) A conventional convolutional filter that is the same size as the input patch of size H×W. (b) An n-stage csconv filter. The input is a patch P of size H×W and the final output is of size 1×1.

III-A Subpatch Filter

The task is to represent an input patch P of size H×W×C, where H×W stands for the spatial size and C is the number of channels. Throughout this paper, the spatial size H×W is used to express the patch size. By vectorizing the three-order tensor, P can be expressed as an (H·W·C)-dimensional column vector x. Conventional convolution uses a linear filter w_n whose size is the same as that of the vector x. The conventional convolution can be computed by the inner product

$$y_n = \mathbf{w}_n^{\top}\mathbf{x}, \qquad n = 1, \ldots, N, \qquad (1)$$

where N is the number of output channels. The convolution converts the patch of spatial size H×W into the scalar y_n. For the sake of notational consistency, we use 1×1 to represent the size of the feature y_n. Fig. 2(a) shows the conventional convolution.
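
As a minimal illustration of Eq. 1, the following sketch (with hypothetical sizes and randomly generated values, not the paper's actual filters) vectorizes a patch and applies one filter per output channel as an inner product:

```python
import numpy as np

# A minimal sketch of Eq. 1: a conventional filter whose spatial size equals that
# of the input patch reduces the whole H x W x C patch to a single scalar per
# output channel via an inner product. All sizes here are hypothetical.
H, W, C = 5, 5, 16        # input patch size (hypothetical)
N = 32                    # number of output channels (hypothetical)

patch = np.random.randn(H, W, C)
filters = np.random.randn(N, H, W, C)   # one H x W x C filter per output channel

x = patch.reshape(-1)                   # vectorize the three-order tensor: (H*W*C,)
W_mat = filters.reshape(N, -1)          # each row is one vectorized filter
y = W_mat @ x                           # N inner products -> a 1 x 1 x N output
print(y.shape)                          # (32,)
```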

To make the feature representation more effective, we propose to utilize cascaded subpatch filters to transform the patch from size H×W to 1×1. Let x_s be a subpatch of X of size h×w with h ≤ H and w ≤ W. The number of overlapping subpatches of size h×w in the patch of size H×W is (H−h+1)×(W−w+1). A subpatch filter consists of two subsequent filters, with the size of the first filter being h×w and the size of the second filter being 1×1. To explicitly show that a subpatch filter is composed of two basic filters, we denote the subpatch filter by its size (h×w, 1×1). We call the second filter the channel filter because its function is to fully connect the different channels. We call the first filter the spatial filter because its size is larger than 1×1 and its role is to extract features from both the spatial and the channel domains.

The inner product between the spatial filter and the subpatch x_s is

$$y^{s}_{m} = (\mathbf{w}^{s}_{m})^{\top}\mathbf{x}_{s}, \qquad m = 1, \ldots, M, \qquad (2)$$

where M is the number of output channels of the spatial filter. Express the output of the spatial filter as an M-dimensional feature vector y^s. Taking the feature vector y^s as the input of the channel filter, the second inner product is obtained by

$$y^{c}_{n} = (\mathbf{w}^{c}_{n})^{\top}\mathbf{y}^{s}, \qquad n = 1, \ldots, N, \qquad (3)$$

where N is the number of output channels of the channel filter.

Eq. 2 and Eq. 3 involve only one subpatch. There are (H−h+1)×(W−w+1) subpatches, so we apply Eq. 2 and Eq. 3 to all of them. That is, the subpatch filter of size (h×w, 1×1) convolves with the input patch P. The convolution is conducted without zero-padding. Consequently, the output of the convolution with the subpatch filter is a patch of size H′×W′, where H′ is H−h+1 and W′ is W−w+1. Fig. 1 illustrates one subpatch filter of size (h×w, 1×1).
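
The following sketch (in PyTorch, with hypothetical channel counts) shows one subpatch filter of size (3×3, 1×1) applied to a 5×5 input patch; as described above, the convolution uses no zero-padding, so the output is a 3×3 patch:

```python
import torch
import torch.nn as nn

# A minimal sketch of one subpatch filter (Eqs. 2-3), assuming hypothetical
# channel counts: a 3x3 spatial filter followed by a 1x1 channel filter.
# With no zero-padding, a 5x5 input patch yields a 3x3 output patch.
C_in, n_spatial, n_channel = 16, 32, 32   # hypothetical channel counts

subpatch_filter = nn.Sequential(
    nn.Conv2d(C_in, n_spatial, kernel_size=3, padding=0),  # spatial filter (Eq. 2)
    nn.ReLU(inplace=True),
    nn.Conv2d(n_spatial, n_channel, kernel_size=1),        # channel filter (Eq. 3)
    nn.ReLU(inplace=True),
)

patch = torch.randn(1, C_in, 5, 5)        # one 5x5 input patch with C_in channels
out = subpatch_filter(patch)
print(out.shape)                          # torch.Size([1, 32, 3, 3])
```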

(a) A conventional filter
(b) A two-stage csconv filter
(c) A three-stage csconv filter
Fig. 3: Abstracting the input patch using different filters. (a) Convolution with a conventional filter of the same size as the patch, directly generating a 1×1 output. (b) Convolution with a two-stage csconv filter, generating one intermediate feature layer (consisting of a number of feature channels). (c) Convolution with a three-stage csconv filter, generating two intermediate feature layers.

III-B Generating a Csconv Filter by Cascaded Subpatch Filters

In the previous section, we explicitly denoted a subpatch filter by its size (h×w, 1×1). When there are several subpatch filters of different sizes, for the sake of clarity, we denote the size of the i-th subpatch filter by (h_i×w_i, 1×1). Fig. 1 shows that convolving a subpatch filter of size (h_1×w_1, 1×1) within the input patch of size H×W results in an output patch of size H_1×W_1 with H_1 = H−h_1+1 and W_1 = W−w_1+1. But our goal is to output a 1×1 patch to represent the input P. This goal can be arrived at by convolving the output patch with another subpatch filter of size (h_2×w_2, 1×1) with h_2 ≤ H_1 and w_2 ≤ W_1. The size of its output is H_2×W_2 with H_2 = H_1−h_2+1 and W_2 = W_1−w_2+1. It can be noted that H_2 ≤ H_1 and W_2 ≤ W_1. That is, once a subpatch filter is used, the spatial size of the output patch is decreased. The subpatch filters are applied one after another until the output is of size 1×1. Specifically, the spatial sizes of the filters satisfy:

$$\sum_{i=1}^{n}(h_i - 1) = H - 1, \qquad \sum_{i=1}^{n}(w_i - 1) = W - 1, \qquad (4)$$

where n is the number of cascaded subpatch filters. It is noted that the size of the penultimate output patch is the same as that of the spatial filter of the last subpatch filter.

Suppose that n subpatch filters are finally used to obtain a 1×1 output patch. We denote the cascaded subpatch filters by (h_1×w_1, 1×1)-(h_2×w_2, 1×1)-…-(h_n×w_n, 1×1). If all the subpatch filters have the same size (i.e., h_i×w_i = h×w for every i), the notation can be abbreviated accordingly. We call the cascaded subpatch filters an n-stage csconv filter. Fig. 2(b) demonstrates an n-stage csconv filter.

Given a local patch, different filters can be used to deal with it. An example of a conventional filter, a two-stage csconv filter, and a three-stage csconv filter is shown in Fig. 3. In Fig. 3(a), the conventional filter has the same size as the input patch and the convolution directly generates a 1×1 output. In Fig. 3(b), an intermediate layer of feature channels is obtained by applying the first subpatch filter, and then a 1×1 output is obtained by applying the second subpatch filter on it. In Fig. 3(c), a first intermediate layer of feature channels is obtained by applying the first subpatch filter, a second intermediate layer is obtained by applying the second subpatch filter on it, and finally a 1×1 output is obtained by applying the third subpatch filter. It is seen that the conventional convolution is the simplest one and that the csconv convolution with a three-stage csconv filter is the most complex one.
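
A small sketch of a two-stage csconv filter in the spirit of Fig. 3(b), again with hypothetical channel counts: the first stage reduces a 5×5 patch to 3×3 and the second stage reduces 3×3 to 1×1, so the whole patch is finally represented by one neuron per output channel:

```python
import torch
import torch.nn as nn

# Sketch of a two-stage csconv filter (3x3,1x1)-(3x3,1x1) applied to a 5x5 patch.
# Channel counts are hypothetical. Stage 1 shrinks 5x5 to 3x3; stage 2 shrinks
# 3x3 to 1x1, so the patch is represented by one neuron per output channel.
def subpatch_stage(c_in, c_mid, c_out, k):
    return nn.Sequential(
        nn.Conv2d(c_in, c_mid, kernel_size=k, padding=0),  # spatial filter
        nn.ReLU(inplace=True),
        nn.Conv2d(c_mid, c_out, kernel_size=1),            # channel filter
        nn.ReLU(inplace=True),
    )

csconv = nn.Sequential(
    subpatch_stage(16, 32, 32, k=3),   # 5x5 -> 3x3
    subpatch_stage(32, 32, 64, k=3),   # 3x3 -> 1x1
)

patch = torch.randn(1, 16, 5, 5)
print(csconv(patch).shape)             # torch.Size([1, 64, 1, 1])
```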

III-C Forming a Cascaded Subpatch Network (CSNet) by Stacking Csconv Layers

Fig. 4: The overall structure of CSNet. The number of csconv filters and the number of subpatch filters in each csconv filter can be tuned to deal with different tasks.

Let F be a csconv filter. Applying F to an input patch yields one unit, and convolving F over the whole input channels yields a convolutional layer (called a csconv layer) containing a number of such units. As in a conventional CNN, we can create a new CNN (called CSNet) by stacking a number of csconv layers. It is noted that, when used in a layer, both the spatial filter and the channel filter are expressed as four-order tensors by explicitly writing the number of input channels and the number of output channels of each filter.

It is noted that the number of subpatch filters of a csconv filter is determined by the spatial size of the input patch and the spatial size of each subpatch filter (see Eq. 4). It is also noted that different csconv layers can have either different or the same configuration of csconv filters; for example, the first three csconv layers of one CSNet may use csconv filters with different numbers of stages and different spatial sizes. Fig. 4 shows the overall structure of the proposed CSNet. The first two csconv layers both have a two-stage csconv filter. The number of csconv filters and the number of subpatch filters in each csconv filter can be tuned according to different tasks. In the proposed CSNet, a Rectified Linear Unit (ReLU) [16] follows the output of each convolution of the subpatch filter. Sub-sampling layers can be added in between the csconv layers as in a CNN if necessary.
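
The sketch below illustrates how csconv layers could be stacked into a small CSNet-like network, with max-pooling between csconv layers and global average pooling on top. The channel counts and the padding choice are hypothetical and not the paper's exact configuration:

```python
import torch
import torch.nn as nn

# Hedged sketch of a small CSNet in the spirit of Fig. 4: each csconv layer is a
# two-stage csconv filter (3x3,1x1)-(3x3,1x1) slid over the whole feature map.
# Channel counts and the use of padding=1 inside a layer are hypothetical.
def csconv_layer(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 1),           nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 1),           nn.ReLU(inplace=True),
    )

class TinyCSNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            csconv_layer(3, 96),   nn.MaxPool2d(2),
            csconv_layer(96, 192), nn.MaxPool2d(2),
            csconv_layer(192, num_classes),   # last layer: one channel per class
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling

    def forward(self, x):
        x = self.pool(self.features(x))
        return x.flatten(1)

print(TinyCSNet()(torch.randn(2, 3, 32, 32)).shape)   # torch.Size([2, 10])
```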

III-D Computational Complexity Analysis

Though the csconv convolution is much more complex than the conventional convolution, this does not mean that the number of parameters of a deep CSNet has to be huge. A comparison of the parameters consumed by conventional convolution and csconv convolution is shown in Tab. I. Suppose that a conventional convolution has a filter of size k×k×M×N (see Fig. 3(a)), where k×k is the spatial size of the convolutional filter, M is the number of input feature channels, and N is the number of output feature channels; it therefore consumes k·k·M·N parameters. The corresponding three-stage csconv convolution (see Fig. 3(c)) can be implemented with different numbers of intermediate input and output channels. Table I presents the configurations of two common csconv layers, denoted csconv 1 and csconv 2, together with their parameter counts. Comparing the counts shows that, as long as the intermediate channel numbers are chosen moderately with respect to M and N, the number of parameters consumed by csconv 1 and csconv 2 is no larger than that of the conventional convolution, and in typical settings it is clearly smaller.

In case the channel numbers do not satisfy the constraints above, the 1×1 convolution can be used as a reduction layer to reduce the number of intermediate output feature channels. This guarantees that the total number of parameters consumed by the csconv convolution is no larger than that of the conventional convolution. Since the parameters of each csconv layer are no more than those of the corresponding conventional convolutional layer, the total parameters of a deep CSNet are also no more than those of the corresponding conventional neural network.

Method      Conventional   csconv 1   csconv 2
Structure   k×k filter     three-stage csconv   three-stage csconv
#params     k·k·M·N
TABLE I: The number of parameters consumed by conventional convolution and csconv convolution
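
A rough parameter count in the spirit of Table I can be reproduced as follows; the intermediate channel choices are hypothetical, and biases are ignored:

```python
# Weight count for one conventional k x k convolution versus a cascaded subpatch
# replacement. The intermediate channel counts below are hypothetical choices,
# not the exact configurations used in the paper.
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

M, N = 96, 96          # input / output channels (hypothetical)

conventional = conv_params(7, M, N)                        # one 7x7 filter

# Three-stage csconv filter (3x3,1x1)-(3x3,1x1)-(3x3,1x1) covering a 7x7 patch.
mid = 96                                                   # hypothetical width
csconv = sum([
    conv_params(3, M, mid),   conv_params(1, mid, mid),    # stage 1
    conv_params(3, mid, mid), conv_params(1, mid, mid),    # stage 2
    conv_params(3, mid, mid), conv_params(1, mid, N),      # stage 3
])

print(conventional, csconv)   # 451584 276480 -> the csconv replacement is smaller
```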

IV Experimental Results

We evaluate the proposed CSNet on four standard benchmark datasets: CIFAR10 [39], CIFAR100 [39], MNIST [5], and SVHN [39]. We compare our CSNets with a dozen well-known networks that have achieved state-of-the-art performance on the four datasets. These networks include Maxout (Maxout Networks) [8], NIN (Network in Network) [14], NIN+LA (Networks with Learned Activation Functions) [26], FitNet (Thin and Deep Networks) [19], DSN (Deeply Supervised Networks) [13], DropConnect (Networks using DropConnect) [27], dasNet (Deep Attention Selective Networks) [35], Highway (Networks Allowing Information Flow on Information Highways) [23], ALL-CNN (All Convolutional Networks) [38], RCNN (Recurrent Convolutional Neural Networks) [28], and ResNet (Deep Residual Networks) [32].

IV-A Configuration

We adopt the global average pooling scheme introduced in [14] on the top layer of CSNet. We also incorporate dropout layers with a dropout rate of 0.5 [8]. In addition, we use Batch Normalization (BN) [10] to accelerate the training stage. The CSNet is implemented using the MatConvNet [40] toolbox in the Matlab environment. We follow a common training protocol [8] in all experiments. We use the stochastic gradient descent technique with a mini-batch size of 100 and a fixed momentum value of 0.9. The initial values of the learning rate and the weight decay factor are determined based on the validation set. The proposed CSNet converges easily and no particular engineering tricks are adopted in any of our experiments. All the results are achieved without using model averaging [12] techniques, which can help improve performance.
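
An illustrative (not the authors') optimizer setup matching the protocol above might look as follows; the model is a stand-in, and the learning rate and weight decay values are placeholders, since the paper tunes the initial values on a validation set:

```python
import torch
import torch.nn as nn

# Illustrative SGD setup for the training protocol described above
# (mini-batches of 100, momentum 0.9). The model is a stand-in and the
# lr / weight_decay values are hypothetical placeholders.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4)
batch_size = 100   # mini-batch size used in the paper
```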

To comprehensively evaluate the performance of the proposed CSNet, we design three CSNets of different architectures, each with a different number of parameters. Our small CSNet (CSNet-S), middle CSNet (CSNet-M) and large CSNet (CSNet-L) have 0.96M, 1.6M and 3.5M parameters, respectively. The configurations of CSNet-S, CSNet-M, and CSNet-L are given in Table II, and the corresponding overall structures are presented in Fig. 6. Though the three CSNets are specifically designed for the CIFAR10 dataset, they are also applied to the other three datasets with almost all the parameters remaining the same. The only modification is to change the number of output feature channels of the last csconv layer from 10 to 100 on the CIFAR100 dataset.

As shown in Table II, the CSNet-S and the CSNet-M have three csconv layers, and the CSNet-L has four csconv layers. Since the input samples are small (32×32 or 28×28), the receptive field of the filters adopted by traditional methods is typically of size 5×5. Therefore, our CSNets use two-stage csconv filters, each stage consisting of a 3×3 spatial filter and a 1×1 channel filter, to replace linear filters of size 5×5. Fig. 5 shows the two-stage csconv filter used in our experiments. As shown in Fig. 6, max-pooling follows the first two csconv layers of each CSNet. Average-pooling is applied after the last csconv layer to assign one single score to each class. A softmax classifier is then used to recognize the objects.

Fig. 5: Abstracting one patch of size 5×5 with a two-stage csconv filter
(a) CSNet-S
(b) CSNet-M
(c) CSNet-L
Fig. 6: Overall structures of three different CSNets. The three CSNets are designed for the CIFAR10 dataset. However, they are also applied to the other three datasets with almost all the parameters remaining the same. (a) The CSNet-S with three csconv layers. (b) The CSNet-M with three csconv layers. (c) The CSNet-L with four csconv layers.
Model     Patch size   #params
CSNet-S   5×5          0.96M
CSNet-M   5×5          1.6M
CSNet-L   5×5          3.5M
TABLE II: Configurations of three different CSNets. Two-stage csconv filters are used to represent patches of size 5×5

IV-B Results on the CIFAR10 Dataset

The CIFAR10 dataset [37] consists of 10 classes of images, with 50K training images and 10K testing images. These are 32×32 color images of airplanes, automobiles, ships, trucks, horses, dogs, cats, birds, deer and frogs. Before training, we preprocess the images using global contrast normalization and ZCA whitening. We carry out experiments with and without data augmentation, respectively. For a fair comparison, we obtain the augmented dataset by padding 4 pixels on each side and then doing random cropping and random flipping on the fly during training. The augmented data is denoted by CIFAR10+. During testing, we only evaluate the single view of the original color image.
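
One common way to realize this augmentation (padding 4 pixels, random cropping, random horizontal flipping) is shown below; this is a standard torchvision recipe, not necessarily the authors' exact implementation, and global contrast normalization and ZCA whitening would be applied to the data separately:

```python
import torchvision.transforms as T

# Standard CIFAR10 training-time augmentation: pad 4 pixels on each side,
# take a random 32x32 crop, and flip horizontally with probability 0.5.
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
```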

Methods #layers #params CIFAR10 CIFAR10+
NIN[14] 9 0.97M 10.41 8.81
CSNet-S 12 0.96M 8.33 6.98
ResNet-110[32] 110 1.7M - 6.43
CSNet-M 12 1.6M 8.15 6.38
ResNet-1202[32] 1202 19.4M - 7.93
CSNet-L 16 3.5M 7.74 5.68
TABLE III: Quick overview of the comparison between the (small, middle and large) CSNets and their counterparts. The results are reported as classification error (in %) on CIFAR10 without (CIFAR10) and with (CIFAR10+) data augmentation

To get a quick overview of the performance of the CSNets, we first compare them with two well-known neural networks on this dataset. The first one is the classic NIN network, which has 0.97M parameters. The second one is a newer network called ResNet [32], the champion of the ILSVRC 2015 [20] classification task. ResNet-110 [32] is a very deep neural network with up to 110 layers and 1.7M parameters. ResNet-1202 [32] is much deeper still and has 19.4M parameters. It can be seen that the CSNet-S (0.96M) has slightly fewer parameters than NIN, that the CSNet-M (1.6M) has 0.1M fewer parameters than ResNet-110, and that the CSNet-L (3.5M) has far fewer parameters than ResNet-1202.

The comparison results are presented in Tab. III. Compared with NIN, the CSNet-S reduces the test error from 10.41% to 8.33% (without data augmentation), improving the performance by more than two percent. The CSNet-M obtains a test error of 6.38%, which is slightly lower than the 6.43% of ResNet-110 (with data augmentation). However, the CSNet-M has only 12 layers, far fewer than the 110 layers of ResNet-110. Therefore, it is much easier to train CSNet-M than ResNet-110. Unlike ResNet-1202, whose performance degrades because of its huge number of parameters, our CSNet-L further reduces the test error to 5.68%. The above comparison results demonstrate the superiority of the proposed CSNets. A comprehensive comparison of various methods is presented in Tab. IV. It can be seen that the CSNet-S is already among the state-of-the-art results. The CSNet-M surpasses ResNet-110 by 0.05% and the CSNet-L surpasses ResNet-110 by 0.75%.

Methods CIFAR10 CIFAR10+
Maxout[8] 11.68 9.38
NIN[14] 10.41 8.81
NIN+LA[26] 9.59 7.51
FitNet[19] - 8.39
DSN[13] 9.75 8.22
DropConnect[27] 9.41 -
dasNet[35] 9.22 -
Highway[23] - 7.54
ALL-CNN[38] 9.08 7.25
RCNN-160[28] 8.69 7.09
ResNet-110[32] - 6.43
ResNet-1202[32] - 7.93
CSNet-S 8.33 6.98
CSNet-M 8.15 6.38
CSNet-L 7.74 5.68
TABLE IV: Classification error (in %) for CIFAR10, without (CIFAR10) and with (CIFAR10+) data augmentation, using various methods. A ‘-’ indicates the cited work did not present results for that setting

IV-C Results on the CIFAR100 Dataset

The CIFAR100 dataset [37] is similar to the CIFAR10 dataset. It has the same number of training images and testing images as CIFAR10. However, CIFAR100 contains 100 classes, ten times as many as CIFAR10, so the number of images in each class is only one tenth of that in CIFAR10. The 100 classes in CIFAR100 are grouped into 20 super-classes. Each image has two labels: one is the ”fine” label indicating the specific class and the other is the ”coarse” label indicating the super-class. Considering the number of training images per class, it is much more difficult to recognize the 100 classes of CIFAR100 than the 10 classes of CIFAR10. No data augmentation is used for CIFAR100. We use the same data preprocessing methods as for CIFAR10.

Methods CIFAR100
Maxout[8] 38.57
NIN[14] 35.68
NIN+LA[26] 34.40
FitNet[19] 35.04
DSN[13] 34.57
dasNet[35] 33.78
ALL-CNN[38] 33.71
Highway[23] 32.24
RCNN-160[28] 31.75
CSNet-M 30.24
TABLE V: Classification error (in %) for CIFAR100 using various methods

Since there are 100 classes to be recognized, we adopt the CSNet-M in this experiment. The only difference is that the last convolution of the third csconv layer outputs 100 feature channels, each of which is then averaged to generate one score for one specific class. Details of the performance comparison are shown in Tab. V. It can be seen that CSNet-M obtains a test error of 30.24% on CIFAR100, which surpasses the second best performance (RCNN-160 with 31.75%) by 1.51 percent. It should also be noted that RCNN-160 has 1.87M parameters, about 0.27M more than CSNet-M.
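
Concretely, this modification only affects the final 1×1 channel filter of the last csconv layer, as in the hypothetical sketch below (the input channel count of 192 is illustrative, not the paper's configuration):

```python
import torch.nn as nn

# Hypothetical illustration of the CIFAR100 change described above: the final
# 1x1 convolution of the last csconv layer outputs 100 feature channels (one per
# class) instead of 10; global average pooling then yields one score per class.
last_channel_filter_cifar10  = nn.Conv2d(192, 10,  kernel_size=1)
last_channel_filter_cifar100 = nn.Conv2d(192, 100, kernel_size=1)
```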

IV-D Results on the MNIST Dataset

MNIST [5] is one of the most well-known datasets in the field of machine learning. It consists of handwritten digits ranging from 0 to 9. There are 60,000 training images and 10,000 testing images, all of which are 28×28 gray-scale images. Only mean subtraction is used to preprocess the dataset. Since MNIST is a relatively simple dataset compared with CIFAR10, CSNet-S is used in this experiment. The results of the performance comparison are shown in Tab. VI. It can be seen that CSNet-S achieves the state-of-the-art performance with a test error of 0.31%.

Methods MNIST
DropConnect[27] 0.57
FitNet[19] 0.51
NIN[14] 0.47
Maxout[8] 0.45
Highway[23] 0.45
DSN[13] 0.39
RCNN-96[28] 0.31
CSNet-S 0.31
TABLE VI: Classification error (in %) for MNIST using various methods

IV-E Results on the SVHN Dataset

The SVHN (Street View House Numbers) dataset [39] is a real-world image dataset containing 10 classes representing the digits 0 to 9. There are 630,420 32×32 color images in total, divided into three sets: 73,257 images in the training set, 26,032 images in the testing set, and 531,131 images in the extra set. More than one digit may appear in an image, and the task is to classify the digit located at the center. We follow the training and testing procedure described by Goodfellow et al. [8]. That is, 400 samples per class are randomly selected from the training set, and 200 samples per class are randomly selected from the extra set; these selected samples together form the validation set. The remaining images of the original training set and the extra set are used for training. The validation set is only used for hyper-parameter selection and is not used during training. Since there are large variations within the same digit class in SVHN due to changes of color and brightness, it is much more difficult to recognize digits in SVHN than in MNIST. Therefore, local contrast normalization is used to preprocess the samples. No data augmentation is used in this experiment. To deal with the large variations of the digits, we use the CSNet-M in this experiment. The performance comparison with other methods is shown in Tab. VII. CSNet-M obtains a test error of 1.90%, which already improves on NIN (2.35% with 1.98M parameters) by 0.45 percent. CSNet-M achieves the second best performance (1.90%), which is very close to the best performance, RCNN-160 with a test error of 1.80%.

Methods SVHN
Maxout[8] 2.47
FitNet[19] 2.42
NIN[14] 2.35
DropConnect[27] 1.94
DSN[13] 1.92
RCNN-160[28] 1.80
CSNet-M 1.90
TABLE VII: Classification error (in %) for SVHN using various methods

V Conclusion

In this paper, we have presented a novel CNN structure called CSNet. The core of CSNet is to represent a local patch with one neuron, which is obtained by using cascaded subpatch filters. The subpatch filter has two characteristics: (1) the spatial size of the subpatch filter is smaller than that of the input patch; (2) the subpatch filter consists of an h×w filter (with h ≤ H and w ≤ W) followed by a 1×1 filter. The role of the cascaded subpatch filters can be considered as representing the input patch using a pyramid whose resolution decreases from H×W to 1×1. Due to its strong feature representation ability, the proposed method achieves state-of-the-art performance.

References