Deep Convolutional Neural Networks with Merge-and-Run Mappings

11/23/2016 · Liming Zhao, et al. · Zhejiang University, Microsoft, University of California San Diego

A deep residual network, built by stacking a sequence of residual blocks, is easy to train, because identity mappings skip residual branches and thus improve information flow. To further reduce the training difficulty, we present a simple network architecture, deep merge-and-run neural networks. The novelty lies in a modularized building block, the merge-and-run block, which assembles residual branches in parallel through a merge-and-run mapping: average the inputs of these residual branches (Merge), and add the average to the output of each residual branch as the input of the subsequent residual branch (Run), respectively. We show that the merge-and-run mapping is a linear idempotent function, i.e., its transformation matrix is idempotent, and thus improves information flow, making training easy. In comparison to residual networks, our networks enjoy compelling advantages: they contain much shorter paths, and the width, i.e., the number of channels, is increased. We evaluate the performance on standard recognition tasks. Our approach demonstrates consistent improvements over ResNets with comparable setups, and achieves competitive results (e.g., 3.57% testing error on CIFAR-10, 19.00% on CIFAR-100, 1.51% on SVHN).


1 Introduction

Deep convolutional neural networks, since the breakthrough result in the ImageNet classification challenge [13], have been widely studied [30, 27, 7]. Impressive performance has been achieved in many other computer vision tasks, including object detection [5], semantic segmentation [17], edge detection [38], and so on.

Residual networks (ResNets) [7] have been attracting a lot of attention since they won the ImageNet challenge, and various extensions have been studied [39, 32, 40, 1]. The basic unit is a residual block consisting of a residual branch and an identity mapping. Identity mappings introduce short paths from the input to the intermediate layers and from the intermediate layers to the output layers [35, 34], and thus reduce the training difficulty.


Figure 1: Illustrating the building blocks: (a) two residual blocks; (b) an inception-like block; (c) a merge-and-run block. (a) corresponds to two blocks in ResNets and assembles two residual branches sequentially, while (b) and (c) both assemble the same two residual branches in parallel. (b) and (c) adopt two different skip connections: identity mappings and our proposed merge-and-run mappings. The dotted circle denotes the average operation, and the solid circle denotes the sum operation.

In this paper, we are interested in further reducing the training difficulty and present a simple network architecture, called deep merge-and-run neural networks, which assembles residual branches more effectively. The key point is a novel building block, the merge-and-run block, which assembles residual branches in parallel with a merge-and-run mapping: average the inputs of these residual branches (Merge), and add the average to the output of each residual branch as the input of the subsequent residual branch (Run), respectively. Figure 1 depicts the architectures by taking two residual branches as an example: (a) two residual blocks that assemble two residual branches sequentially; (b) an inception-like block and (c) a merge-and-run block, which both assemble the same two residual branches in parallel.

Obviously, the resulting network contains shorter paths, as the parallel assembly of residual branches directly reduces the network depth. We give a straightforward verification: the average length of a path through two stacked residual blocks is larger than the average length of a path through the corresponding inception-like block or merge-and-run block. Our networks, built by stacking merge-and-run blocks, are less deep and thus easier to train.

We show that the merge-and-run mapping is a linear idempotent function, i.e., its transformation matrix is idempotent. This implies that the information from the early blocks can quickly flow to the later blocks, and that the gradient can be quickly back-propagated from the later blocks to the early blocks. This point essentially provides a theoretical counterpart to the short paths, showing that the training difficulty is reduced.

We further show that merge-and-run blocks are wider than residual blocks. Empirical results validate that for very deep networks, when adding layers, it is more effective to use them to increase the width than to increase the depth. Besides, we discuss the generalization of merge-and-run mappings to other linear idempotent transformations, and the extension to more residual branches.

The empirical results demonstrate that the performance of our networks is superior to that of the corresponding ResNets with comparable setups on CIFAR-10, CIFAR-100, SVHN, and ImageNet. Our networks achieve competitive results compared to the state of the art (e.g., 3.57% testing error on CIFAR-10, 19.00% on CIFAR-100, 1.51% on SVHN).

2 Related Works

There has been rapid and great progress on deep neural networks in various aspects, such as optimization techniques [28, 11, 18], initialization schemes [19], regularization strategies [26], activation and pooling functions [6, 3], network architectures [16, 22, 20, 24], and applications. In particular, network architecture design has recently been attracting a lot of attention.

Highway networks [27], residual networks [7, 8], and GoogLeNet [30] have been shown to be able to effectively train very deep networks (up to hundreds or even thousands of layers). The identity mapping or the bypass path is thought to be the key factor making the training of very deep networks easy. Following the ResNet architecture, several variants have been developed by modifying the architecture, such as wide residual networks [39], ResNet in ResNet [32], multilevel residual networks [40], multi-residual networks [1], and so on. Another variant, DenseNets [9], directly connects all layers and is able to improve effectiveness through feature reuse. In addition, optimization techniques, such as stochastic depth [10] for ResNet optimization, have been developed.

Deeply-fused networks [35], FractalNet [14], and the ensemble view [34] point out that a ResNet and a GoogLeNet [30, 31, 29] are a mixture of many dependent networks. The ensemble view [34] observes that ResNets behave like an exponential ensemble of relatively shallow networks, and points out that introducing short paths helps ResNets avoid the vanishing gradient problem, which is similar to the analysis in deeply-fused networks [35] and FractalNet [14].

The architecture of our approach is closely related to Inception [30] and Inception-ResNet blocks [29], multi-residual networks [1], and ResNeXt [37], which also contain multiple branches in each block. One notable point is that we introduce merge-and-run mappings, which are linear idempotent functions, to improve information flow for building blocks consisting of parallel residual branches.

In comparison to ResNeXts [37], which also assemble residual branches in parallel, our approach adopts parallel assembly to directly reduce the depth and does not modify the residual branches, while ResNeXts [37] transform a residual branch into many small residual branches. Compared with Inception [30] and Inception-ResNet blocks [29], which are highly customized, our approach requires less design effort and is more flexible.


Figure 2: (a) a deep residual network; (b) a network built by stacking inception-like blocks; (c) our deep merge-and-run neural network built by stacking merge-and-run blocks. The trapezoid shape indicates that down-sampling occurs in the corresponding layer, and the dashed line denotes a projection shortcut as in [7].

3 Deep Merge-and-Run Neural Networks

Table 1: Network architectures. Each row corresponds to a layer group, with its output size and the blocks used in ResNets and in DMRNets/DILNets: the first convolutional layer (conv0), the three stages of blocks (conv1, conv2, conv3), and the classifier (global average pooling, FC, softmax). Inside the brackets are the shapes of the residual, inception-like, and merge-and-run blocks, and outside the brackets is the number of stacked blocks in each stage. Downsampling is performed in conv2_1 and conv3_1 with stride 2. As in [7], we use projection convolutions to replace identity mappings for ResNets and inception-like blocks, and perform convolutions before merging in merge-and-run blocks when the width increases across stages. In each convolution, the input channel number can be inferred from the preceding layer.

3.1 Architectures

We introduce the architectures by considering a simple realisation, assembling two residual branches in parallel to form the building blocks. We first introduce the building blocks in ResNets, then a straightforward manner to assemble residual branches in parallel, and finally our building blocks.

The three building blocks are illustrated in Figure 1. Examples of the corresponding network structures, ResNets, DILNets (deep inception-like neural networks), and DMRNets (deep merge-and-run neural networks), are illustrated in Figure 2. The descriptions of network structures used in this paper are given in Table 1.

Residual blocks. A residual network is composed of a sequence of residual blocks. Each residual block contains two branches: identity mapping and residual branch. The corresponding function is given as,

$\mathbf{x}_{l+1} = \mathbf{x}_l + F_l(\mathbf{x}_l). \qquad (1)$

Here, $\mathbf{x}_l$ denotes the input of the $l$-th residual block, and $F_l(\cdot)$ is a transition function, corresponding to the residual branch composed of a few stacked layers.

Inception-like blocks. We assemble two residual branches in parallel and sum up the outputs from the two residual branches and the identity mapping. Denoting the two residual branches by $G_l(\cdot)$ and $H_l(\cdot)$, the corresponding function is as follows,

$\mathbf{x}_{l+1} = G_l(\mathbf{x}_l) + H_l(\mathbf{x}_l) + \mathbf{x}_l, \qquad (2)$

where $\mathbf{x}_l$ and $\mathbf{x}_{l+1}$ are the input and the output of the $l$-th inception-like block. This structure resembles the building block in the concurrently-developed ResNeXt [37], but the purposes are different: our purpose is to reduce the depth through assembling residual branches in parallel, while the purpose of ResNeXt [37] is to transform a single residual branch into many small residual branches.

Merge-and-run. A merge-and-run block is formed by assembling two residual branches in parallel with a merge-and-run mapping: Average the inputs of two residual branches (Merge), and add the average to the output of each residual branch as the input of the subsequent residual branch (Run), respectively. It is formulated as below,

$\mathbf{x}_{l+1} = G_l(\mathbf{x}_l) + \tfrac{1}{2}(\mathbf{x}_l + \mathbf{y}_l), \qquad \mathbf{y}_{l+1} = H_l(\mathbf{y}_l) + \tfrac{1}{2}(\mathbf{x}_l + \mathbf{y}_l), \qquad (3)$

where $\mathbf{x}_l$ and $\mathbf{y}_l$ ($\mathbf{x}_{l+1}$ and $\mathbf{y}_{l+1}$) are the inputs (outputs) of the two residual branches of the $l$-th block, and $G_l(\cdot)$ and $H_l(\cdot)$ denote the two residual branches. There is a clear difference from the inception-like block in Equation 2: the inputs of the two residual branches are different, and their outputs are also kept separate.
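To make Equation 3 concrete, the following is a minimal PyTorch-style sketch of a merge-and-run block with two residual branches. It is an illustrative sketch, not the authors' MXNet implementation: the class and function names are our own, and each branch is assumed to be a stack of two 3x3 Conv-BN-ReLU layers as in Figure 1.

```python
import torch
import torch.nn as nn


def conv_bn_relu(channels):
    # A basic 3x3 Conv-BN-ReLU layer (assumed layer shape; see Section 4.2).
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
    )


class MergeAndRunBlock(nn.Module):
    """Two parallel residual branches coupled by a merge-and-run mapping (Eq. 3)."""

    def __init__(self, channels, branch_depth=2):
        super().__init__()
        self.branch_g = nn.Sequential(*[conv_bn_relu(channels) for _ in range(branch_depth)])
        self.branch_h = nn.Sequential(*[conv_bn_relu(channels) for _ in range(branch_depth)])

    def forward(self, x, y):
        # Note: the paper places the average/sum operations between BN and ReLU;
        # this sketch simplifies and applies them after the full Conv-BN-ReLU stack.
        merged = 0.5 * (x + y)               # Merge: average the two branch inputs.
        x_next = self.branch_g(x) + merged   # Run: add the average to each branch output.
        y_next = self.branch_h(y) + merged
        return x_next, y_next


if __name__ == "__main__":
    block = MergeAndRunBlock(channels=16)
    x = y = torch.randn(2, 16, 32, 32)       # The first block can feed the same input to both branches.
    x1, y1 = block(x, y)
    print(x1.shape, y1.shape)
```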

Figure 3: Comparing the distributions of the path lengths for the three networks (the legend reports the average length ± standard deviation for each network). The left and right panels correspond to two different network configurations.

3.2 Analysis

Information flow improvement. We transform Equation 3 into the matrix form,

$\begin{bmatrix}\mathbf{x}_{l+1}\\ \mathbf{y}_{l+1}\end{bmatrix} = \begin{bmatrix}G_l(\mathbf{x}_l)\\ H_l(\mathbf{y}_l)\end{bmatrix} + \mathbf{M}\begin{bmatrix}\mathbf{x}_l\\ \mathbf{y}_l\end{bmatrix}, \qquad \mathbf{M} = \frac{1}{2}\begin{bmatrix}\mathbf{I} & \mathbf{I}\\ \mathbf{I} & \mathbf{I}\end{bmatrix}, \qquad (4)$

where $\mathbf{I}$ is the $d \times d$ identity matrix and $d$ is the dimension of $\mathbf{x}_l$ (and $\mathbf{y}_l$). $\mathbf{M}$ is the transformation matrix of the merge-and-run mapping.

It is easy to show that, like the identity matrix $\mathbf{I}$, $\mathbf{M}$ is an idempotent matrix, i.e., $\mathbf{M}^n = \mathbf{M}$, where $n$ is an arbitrary positive integer. Thus, we have (similar to identity mappings, the analysis is applicable to the case where the flow is not stopped by the nonlinear activation ReLU; this equation is similar to the derivation with identity mappings in [8])

$\begin{bmatrix}\mathbf{x}_{L}\\ \mathbf{y}_{L}\end{bmatrix} = \mathbf{M}\begin{bmatrix}\mathbf{x}_{l}\\ \mathbf{y}_{l}\end{bmatrix} + \mathbf{M}\sum_{i=l}^{L-2}\begin{bmatrix}G_i(\mathbf{x}_i)\\ H_i(\mathbf{y}_i)\end{bmatrix} + \begin{bmatrix}G_{L-1}(\mathbf{x}_{L-1})\\ H_{L-1}(\mathbf{y}_{L-1})\end{bmatrix}, \qquad (5)$

where $[\mathbf{x}_{l}; \mathbf{y}_{l}]$ corresponds to an earlier block (the summation term on the right-hand side of Equation 5 is empty when block $L$ immediately follows block $l$). This implies that during the forward propagation there are quick paths directly sending the input and the outputs of the intermediate residual branches to the later block. We have a similar conclusion for gradient back-propagation. Consequently, merge-and-run mappings improve both forward and backward information flow.
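As a quick numerical sanity check (not part of the original paper), the following NumPy snippet builds the transformation matrix of Equation 4, verifies its idempotence, and confirms the unrolled form of Equation 5 using random linear maps as stand-ins for the residual branches.

```python
import numpy as np

d = 4                                   # feature dimension (illustrative)
I = np.eye(d)
M = 0.5 * np.block([[I, I],             # transformation matrix of the
                    [I, I]])            # merge-and-run mapping (Equation 4)

# Idempotence: M @ M == M, hence M^n == M for any positive integer n.
assert np.allclose(M @ M, M)

# Unroll z_{i+1} = F_i(z_i) + M z_i over three blocks, with random linear
# maps standing in for the residual branches (nonlinearities ignored).
rng = np.random.default_rng(0)
F = [rng.standard_normal((2 * d, 2 * d)) for _ in range(3)]
zs = [rng.standard_normal(2 * d)]
for f in F:
    zs.append(f @ zs[-1] + M @ zs[-1])

# Closed form of Equation 5: M z_l, plus M times the intermediate branch
# outputs, plus the last branch output.
L = len(F)
closed = M @ zs[0] + sum(M @ (F[i] @ zs[i]) for i in range(L - 1)) + F[L - 1] @ zs[L - 1]
assert np.allclose(zs[-1], closed)
print("merge-and-run mapping is idempotent; Equation 5 holds in the linear case")
```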


Figure 4: Illustrating how the two residual branches shown in (a) are transformed into the single residual branch shown in (b): the first layer of (b) goes from narrow to wide, and the second layer goes from wide back to narrow.

Shorter paths. All three networks are mixtures of paths, where a path is defined as a sequence of connected residual branches, identity mappings, and possibly other layers (e.g., the first convolution layer, the FC layer) from the input to the output. Suppose each residual branch contains the same number of layers (two for the example shown in Figure 1), and that the ResNet contains twice as many building blocks as the DILNet and the DMRNet (so that the three networks contain the same set of residual branches); then the paths in the DILNet and the DMRNet are on average shorter than those in the ResNet (without counting projections in short-cut connections). Figure 3 shows the distributions of path lengths of the three networks. Refer to Table 1 for the details of the network structures.
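For illustration only, the snippet below enumerates the path lengths of a plain chain of residual blocks under the counting convention stated above (each residual branch contributes a fixed number of layers, skip connections contribute none, and the first convolution and the FC layer are shared by all paths); the function name and the concrete numbers are our own assumptions, not taken from the paper.

```python
import math

def resnet_path_length_distribution(num_blocks, branch_layers, extra_layers=2):
    """Distribution of path lengths for a chain of residual blocks.

    Assumptions: every path either skips a block (0 layers) or goes through
    its residual branch (`branch_layers` layers); `extra_layers` accounts for
    the first convolution and the final FC layer shared by all paths.
    """
    total_paths = 2 ** num_blocks
    dist = {}
    for k in range(num_blocks + 1):                # k = number of branches taken
        length = k * branch_layers + extra_layers
        dist[length] = math.comb(num_blocks, k) / total_paths
    return dist

dist = resnet_path_length_distribution(num_blocks=9, branch_layers=2)
mean = sum(length * p for length, p in dist.items())
print(f"average path length: {mean:.1f}")          # = branch_layers * num_blocks / 2 + extra_layers
```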

It is shown in [7, 27] that for very deep networks the training becomes hard and that a shorter (but still very deep) plain network performs even better than a longer plain network (our empirical results even show that the deepest path in a ResNet hurts the training of the other paths and thus deteriorates the performance). Since Figure 3 shows that the lengths of the paths in our proposed network are concentrated in the range of shorter lengths, the proposed deep merge-and-run network potentially performs better.

Inception-like blocks are wider. We rewrite Equation 2 in matrix form,

$\mathbf{x}_{l+1} = \begin{bmatrix}\mathbf{I} & \mathbf{I}\end{bmatrix}\begin{bmatrix}G_l(\mathbf{x}_l)\\ H_l(\mathbf{x}_l)\end{bmatrix} + \mathbf{x}_l. \qquad (6)$

Considering the two parallel residual branches, i.e., the first term on the right-hand side, we have several observations. (1) The intermediate representation, $[G_l(\mathbf{x}_l); H_l(\mathbf{x}_l)]$, is $2d$-dimensional and thus wider. (2) The output becomes narrower after multiplication by $[\mathbf{I} \ \ \mathbf{I}]$, and the width is back to $d$. (3) The block is indeed wider except in trivial cases, e.g., when each residual branch contains no nonlinear activations.

Figure 4 presents an example illustrating that an inception-like block is wider. There are two layers in each branch. The two residual branches are equivalent to a single residual branch, also containing two layers: the first layer increases the width from $d$ to $2d$, and the second layer reduces the width back to $d$. There is no such simple transformation for residual branches with more than two layers, but we have similar observations.
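This equivalence is easy to verify numerically. The following sketch uses two-layer fully-connected branches as a simplified stand-in for the convolutional branches in Figure 4 (our own example, with the width chosen arbitrarily); the same argument carries over to convolutions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                        # width of each branch (illustrative)
relu = lambda t: np.maximum(t, 0)

# Two parallel two-layer residual branches: W2 relu(W1 x) and V2 relu(V1 x).
W1, V1 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
W2, V2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))

x = rng.standard_normal(d)
sum_of_branches = W2 @ relu(W1 @ x) + V2 @ relu(V1 @ x)

# Equivalent single branch: the first layer widens d -> 2d, the second narrows 2d -> d.
U1 = np.vstack([W1, V1])                     # (2d, d): stacked first layers
U2 = np.hstack([W2, V2])                     # (d, 2d): concatenated second layers
single_wide_branch = U2 @ relu(U1 @ x)

assert np.allclose(sum_of_branches, single_wide_branch)
print("two parallel branches == one branch of width 2d")
```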

Merge-and-run blocks are much wider. Considering Equation 4, we can see that the widths of the input, the intermediate representation, and the output are all $2d$ (in essence, the space is not fully exploited because the convolutional kernel is block-diagonal). The block is wider than an inception-like block because the outputs of the two residual branches in the merge-and-run block are kept separate, while the outputs in the inception-like block are aggregated. The two residual branches are not independent, as the merge-and-run mapping adds the input of one residual branch to the output of the other residual branch.


Figure 5: Transforming the merge-and-run block shown in (a) into the two-branch block shown in (b). In (b), the convolutions are group convolutions: each group convolution contains two convolutions, each of which receives a different half of the input channels, and the two outputs are concatenated to form the final output, so the width is greater than that of a single branch. The skip connection (dotted line) is a linear transformation whose transformation matrix is idempotent.

Figure 5 shows that the merge-and-run block can be transformed into a two-branch block. The dotted line corresponds to the merge-and-run mapping, which now becomes an integrated linear transformation receiving a single $2d$-dimensional vector as the input. The residual branch consists of two group convolutions, each with two partitions. A group convolution is equivalent to a single convolution with a larger convolution kernel that is a block-diagonal matrix, with each block corresponding to the kernel of one partition in the group convolution.
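The group-convolution statement can also be checked directly. The sketch below (our own example, using PyTorch's grouped convolution) builds a two-group 3x3 convolution and an ordinary convolution whose kernel is block-diagonal in the channel dimensions, and confirms that they produce the same output.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
c = 4                                              # channels per group (illustrative)

# A 3x3 convolution with two groups over 2c input/output channels.
grouped = nn.Conv2d(2 * c, 2 * c, kernel_size=3, padding=1, groups=2, bias=False)

# An ordinary convolution whose kernel is block-diagonal in the channel dimensions.
full = nn.Conv2d(2 * c, 2 * c, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    full.weight.zero_()
    full.weight[:c, :c] = grouped.weight[:c]       # first group's kernel
    full.weight[c:, c:] = grouped.weight[c:]       # second group's kernel

x = torch.randn(1, 2 * c, 8, 8)
assert torch.allclose(grouped(x), full(x), atol=1e-6)
print("grouped conv == full conv with a block-diagonal kernel")
```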

4 Experiments

We empirically show the superiority of DILNets and DMRNets over ResNets. We demonstrate the effectiveness of our DMRNets on several benchmark datasets and compare them with the state of the art.

4.1 Datasets

CIFAR-10 and CIFAR-100. The two datasets are both subsets [12] drawn from the 80-million tiny image database [33]. CIFAR-10 consists of 60,000 32×32 colour images in 10 classes, with 6,000 images per class; there are 50,000 training images and 10,000 test images. CIFAR-100 is like CIFAR-10, except that it has 100 classes each containing 600 images. We follow a standard data augmentation scheme widely used for these datasets [15, 7, 9]: we first zero-pad the images with 4 pixels on each side and then randomly crop to produce 32×32 images, followed by horizontally mirroring half of the images. We preprocess the images by normalizing them using the channel means and standard deviations.

SVHN. The SVHN (street view house numbers) dataset consists of 32×32 digit images. There are 73,257 images in the training set, 531,131 images in an additional training set, and 26,032 images in the testing set. Following the common practice [16, 15, 10], we select 400 samples per class from the training set and 200 samples per class from the additional set, and use the remaining images for training.

4.2 Setup

Networks. We follow ResNets to design our layers: we use three stages (conv1, conv2, conv3) of merge-and-run blocks, with the number of filter channels increasing across the stages, and use a Conv-BN-ReLU with a 3×3 kernel as the basic layer. The image is fed into the first convolutional layer (conv0), whose output then goes to the subsequent merge-and-run blocks. In the experiments, we implement our approach by taking two parallel residual branches as an example. For convolutional layers with 3×3 kernels, each side of the inputs is zero-padded by one pixel. At the end of the last merge-and-run block, a global average pooling is performed and then a softmax classifier is attached. All the operations shown as solid circles in Figure 1 are placed between BN and ReLU.

Training. We use the SGD algorithm with Nesterov momentum to train all the models on CIFAR-10/CIFAR-100 and SVHN, with the total mini-batch split over two GPUs. The learning rate starts from an initial value and is reduced by a constant factor at fixed fractions of the total number of training epochs. Similar to [7, 9], we apply weight decay and momentum, and the weights are initialized as in [6]. Our implementation is based on MXNet [2].
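For concreteness, here is a minimal sketch of the optimization setup described above (SGD with Nesterov momentum and a multi-step learning-rate schedule), written with PyTorch for illustration even though the paper's implementation is based on MXNet; all numeric values below are placeholder assumptions rather than the paper's exact settings.

```python
import torch

def make_optimizer_and_scheduler(model, epochs, base_lr=0.1,
                                 weight_decay=1e-4, momentum=0.9, decay_factor=0.1):
    # Placeholder hyperparameters: the paper's exact values are not reproduced here.
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                                momentum=momentum, nesterov=True,
                                weight_decay=weight_decay)
    # Reduce the learning rate at fixed fractions of the training run
    # (the fractions below are also placeholders).
    milestones = [int(0.5 * epochs), int(0.75 * epochs)]
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=milestones,
                                                     gamma=decay_factor)
    return optimizer, scheduler
```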

4.3 Empirical Study

Table 2: Empirical comparison of DILNets, DMRNets, and ResNets on CIFAR-10, CIFAR-100, and SVHN across several model sizes (#params.). The average classification error and the standard deviation over multiple runs (mean ± std.) are reported. Refer to Table 1 for network structure descriptions.

Shorter paths. We study how the performance changes as the average length of the paths changes, based on two kinds of residual networks. They are formed from the same plain network, whose structure is like the one forming the ResNets given in Table 1: (i) each residual branch corresponds to one whole stage, so the network contains only a few long residual blocks; (ii) each residual branch is short, so the network contains many more residual blocks (like Figure 2 (a)). The average depths of the paths are the same for the two kinds, counting the two projection layers in the shortcut connections.

We vary the depth and record the classification errors for each kind of residual network. Figure 6 shows the curves of the average depth of all the paths vs. classification error on the example CIFAR dataset. We have the following observations. When the network is not very deep and the average length is small, the testing error becomes smaller as the average length increases (the network built from more, shorter residual blocks contains more short paths, which leads to a lower testing error); when the length is large, the testing error becomes larger as the length increases. This indicates that shorter paths result in higher accuracy for very deep networks.

Figure 6: Illustrating how the testing errors of residual networks change as the average path length increases. The results are reported on CIFAR.

Comparison with ResNets. We compare DILNets and DMRNets with the baseline ResNets. They are formed with the same number of layers, and each block in a DILNet and a DMRNet corresponds to two residual blocks in a ResNet. Table 1 depicts the network structures.

The comparison on CIFAR-10 is given in Table 2. One can see that, compared with ResNets, DILNets and DMRNets consistently perform better, and DMRNets perform the best. The superiority of DILNets over ResNets stems from the shorter paths and greater width. The additional advantage of a DMRNet over a DILNet is its much greater width.

The comparisons on CIFAR-100 and SVHN shown in Table 2 are consistent. One exception is that on CIFAR-100 one of the ResNets performs better than the corresponding DILNet, and on SVHN one of the ResNets performs better than the corresponding DILNet and DMRNet. The reason might be that the paths in the DILNet and DMRNet are not very long, that too many short paths lower the performance, and that for networks of such a depth the benefit from increasing the width is less than the benefit from increasing the depth.

Figure 7: Comparing the optimization of ResNets and DMRNets with the same number of layers/parameters. Panels (a)-(c) and (d)-(f) show CIFAR-10, CIFAR-100, and SVHN for two network sizes. The vertical axis corresponds to training losses and testing errors, and the horizontal axis corresponds to #epochs.

Convergence curves. Figure 7 shows the convergence curves of ResNets and DMRNets on CIFAR-10, CIFAR-100, and SVHN. We show training losses instead of training errors because the training errors on CIFAR almost reach zero at convergence and are not distinguishable. One can see that the testing errors of DMRNets are smaller than those of ResNets, and that the training losses are also smaller during the optimization process, suggesting that our gains are not from regularization but from richer representation.

4.4 Comparison with the State of the Art

Table 3: Classification error comparison with the state of the art on CIFAR-10, CIFAR-100, and SVHN (with the depth and number of parameters listed for each model). The compared methods include Network in Network [16], All-CNN [25], FitNet [21], Deeply-Supervised Nets [15], Swapout [23], Highway [27], DFN [35], FractalNet [14] (with and without dropout and drop-path), ResNet [7, 10], ResNet with pre-activation [8], ResNet with stochastic depth [10], Wide ResNet [39] (with and without dropout), RiR [32], Multi-ResNet [1], DenseNet [9], and our DMRNet and DMRNet-Wide. The results of DenseNets are based on the networks without bottlenecks. DMRNet-Wide is the wide version of a DMRNet, i.e., the widths of the three stages are increased.

The comparison is reported in Table 3. We report the results of DMRNets since they are superior to DILNets. Refer to Table 1 for network architecture descriptions. We also report the results of the wide DMRNets (denoted DMRNet-Wide), in which the widths of the three stages are increased. We mark the results that outperform the existing state of the art in bold and the best results in blue.

One can see that one of the DMRNet-Wide models outperforms existing state-of-the-art results and achieves the best results on CIFAR-10 and CIFAR-100. Compared with the second-best approach, DenseNet, which includes more parameters, our network includes fewer parameters (neither DMRNets nor DenseNets use bottlenecks here; DenseNets with bottleneck layers would perform better, and we will combine bottleneck layers into our approach in future work to further improve the accuracy). The other DMRNet-Wide is also very competitive: it outperforms all existing state-of-the-art results on SVHN while containing almost half of the parameters of the competitive DenseNet. These results show that our networks are parameter-efficient.

Compared with the FractalNet, the DMRNet-Wide models are much deeper and contain fewer parameters. Our networks achieve superior performance on all three datasets. This also shows that, because merge-and-run mappings improve information flow for both forward and backward propagation, our networks are less difficult to train even though they are much deeper.

4.5 ImageNet Classification

We compare our DMRNet against the ResNet on the ImageNet classification dataset [4], which consists of 1000 classes of images. The models are trained on the 1.28 million training images and evaluated on the 50k validation images.

Network architecture. We compare with a deep ResNet [7] that is equipped with four stages of residual blocks with bottleneck layers. We form our DMRNet by replacing the residual blocks with our merge-and-run blocks and setting the numbers of blocks in the four stages accordingly.

Optimization. We follow [7] and use SGD to train the two models with the same hyperparameters (weight decay and momentum) as [7]. The mini-batch is split over multiple GPUs, and we adopt the same data augmentation as in [7]. We train the models with the same learning-rate schedule as in [7]: starting from an initial learning rate and dividing it by a constant factor every fixed number of epochs. We evaluate on the single center crop from a resized image.

Results. Table 4 shows the results of our approach, our MXNet implementation of the ResNet, and the ResNet results from [7]. We can see that our approach performs the best in terms of top-1 validation error: it improves over the results of the ResNet from our implementation as well as over the result reported in [7].

The training and validation error curves of the ResNet and our DMRNet are given in Figure 8. It can be observed that our approach performs better in terms of both training and validation errors, which also suggests that the gains are not from regularization but from richer representation. For example, the top-1 validation error of our approach stays lower than that of the ResNet over a wide range of later epochs.

We notice that the results of our ResNet implementation on MXNet differ from the results reported in [7], even though the settings are the same as in [7]. We think the difference might come from the MXNet platform, or from other untested issues pointed out by the authors of ResNets (https://github.com/KaimingHe/deep-residual-networks).

Figure 8: Training error and validation error curves of the ResNet and our DMRNet (DFN-MR) with the same optimization setting on ImageNet. We report the top-1 error for training and for single-crop validation. It can be observed that our approach performs better in terms of both training and validation errors.
Table 4: The validation (single center crop) and training errors (%) of the ResNet and our DMRNet on ImageNet. The columns correspond to the ResNet result from [7], our MXNet implementation of the ResNet, and our DMRNet; the rows report the number of parameters, the top-1 and top-5 validation errors, and the top-1 and top-5 training errors.

5 Discussions

Merge-and-run mappings for more branches. The merge-and-run mapping studied in this paper is about two residual branches. It can easily be extended to more ($k$) branches; accordingly, the merge-and-run mapping becomes a linear transformation whose transformation matrix consists of $k \times k$ blocks, with each block being $\frac{1}{k}\mathbf{I}$.
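Spelled out in the notation of Section 3.2 (our restatement, following directly from averaging the $k$ branch inputs), the $k$-branch merge-and-run mapping and its idempotence read:

$\mathbf{M}_k = \frac{1}{k}\begin{bmatrix}\mathbf{I} & \cdots & \mathbf{I}\\ \vdots & \ddots & \vdots\\ \mathbf{I} & \cdots & \mathbf{I}\end{bmatrix} = \frac{1}{k}\,(\mathbf{1}_k\mathbf{1}_k^{\top})\otimes\mathbf{I}, \qquad \mathbf{M}_k^2 = \frac{1}{k^2}\,(\mathbf{1}_k\mathbf{1}_k^{\top}\mathbf{1}_k\mathbf{1}_k^{\top})\otimes\mathbf{I} = \frac{1}{k}\,(\mathbf{1}_k\mathbf{1}_k^{\top})\otimes\mathbf{I} = \mathbf{M}_k,$

where $\mathbf{1}_k$ is the all-ones vector of length $k$, so the idempotence argument of Section 3.2 carries over unchanged.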

Idempotent mappings. A merge-and-run mapping is a linear idempotent mapping, i.e., a linear transformation whose transformation matrix is idempotent, $\mathbf{M}^2 = \mathbf{M}$. Other idempotent mappings can also be applied to improve information flow. For example, the identity matrix is also idempotent and can be an alternative to merge-and-run mappings. Compared with identity mappings, an additional advantage of merge-and-run mappings is that they introduce interactions between residual branches.

We conducted experiments using a simple idempotent mapping, the identity mapping, for which there is no interaction between the two residual branches; accordingly, the resulting network consists of two ResNets that are separate except for sharing the first convolution layer and the last FC layer. We also compare the performances of the two schemes without sharing those two layers. The overall superior results of our approach, shown in Table 5, demonstrate that the interactions introduced by merge-and-run mappings are helpful.

Table 5: Comparison between merge-and-run mappings and identity mappings on CIFAR-10 and CIFAR-100, with and without sharing, for several depths L. Sharing = sharing the first convolution layer and the last FC layer.

Our merge-and-run mapping is complementary to other design patterns, such as the dense connections in DenseNet [9], the bottleneck design, and so on. In future work, we will study the integration with these design patterns.

Deeper or wider. Numerous studies have been conducted on going deeper and learning very deep networks, even with more than a thousand layers. Our work can be regarded as a way of going wider and less deep, which is also discussed in [36, 39]. The manner of increasing the width in our approach is different from Inception [30], where the outputs of the branches are concatenated to increase the width, and a convolution/pooling layer for each branch in the subsequent Inception block then decreases the width again. Our merge-and-run mapping suggests a novel and cheap way of increasing the width.

6 Conclusions

In this paper, we propose deep merge-and-run neural networks, which improve residual networks by assembling residual branches in parallel through merge-and-run mappings, further reducing the training difficulty. The superior performance stems from several benefits: information flow is improved, the paths are shorter, and the width is increased.

References