ResNet or DenseNet? Introducing Dense Shortcuts to ResNet

10/23/2020 · by Chaoning Zhang, et al.

ResNet or DenseNet? Nowadays, most deep learning based approaches are implemented with seminal backbone networks, among which the two arguably most famous are ResNet and DenseNet. Despite their competitive performance and overwhelming popularity, both have inherent drawbacks. For ResNet, the identity shortcut that stabilizes training also limits its representation capacity, while DenseNet has a higher capacity thanks to multi-layer feature concatenation. However, the dense concatenation introduces a new problem: it requires high GPU memory and more training time. Partly because of this, choosing between ResNet and DenseNet is not trivial. This paper provides a unified perspective of dense summation to analyze them, which facilitates a better understanding of their core difference. We further propose dense weighted normalized shortcuts as a solution to the dilemma between them. Our proposed dense shortcut inherits the simple design philosophy of ResNet and DenseNet. On several benchmark datasets, the experimental results show that the proposed DSNet achieves significantly better results than ResNet and comparable performance to DenseNet while requiring fewer computation resources.


1 Introduction

Deep neural networks (DNNs) have achieved state-of-the-art performance in numerous computer vision tasks [14, 18, 12, 31, 11, 25, 50, 49, 9], and the interpretation of DNNs has also been investigated through the lens of visualization [46, 27, 28] as well as robustness [39, 13, 4, 47, 48, 3, 2]. AlexNet [21] and VGGNet [34] were the pioneering works that demonstrated the potential of DNNs. Inspired by the success of these seminal works, the research focus of the community has shifted from feature engineering [26, 7] to network design engineering, and numerous new network architectures have been proposed to boost the performance of DNNs. ResNet, which reuses preceding features through the identity shortcut, achieved great success and state-of-the-art performance on several benchmark datasets, such as ImageNet [8] and the COCO detection dataset [24]. Compared to the Inception family of architectures, including GoogleNet [37] and Inception-V3 [38], the ResNet family shows better generalization, which implies that the learned features can be used in transfer learning with better efficiency [45].

One of the reasons that make ResNet exceptionally popular is its simple design strategy, which introduces only one identity shortcut. Despite its large success, the weakness of the identity shortcut has been analyzed by follow-up works. The identity shortcut skips the residual blocks to preserve features, which limits the representation power of the network [45, 50]. It also causes the collapsing domain problem, which reduces the learning capacity of the network [29], and [50] proposed to mitigate this with non-linear shortcuts.

Another simple yet effective technique, dense concatenation, was proposed in DenseNet [18] to facilitate training deep networks. DenseNet concatenates the features of all preceding layers to all subsequent layers, avoiding direct summation and thus preserving the preceding features. DenseNet has been shown to use features more efficiently, outperforming ResNet with fewer parameters [18]. Nonetheless, DenseNet requires heavy GPU memory due to the concatenation operations. The memory issue can be mitigated by the memory-efficient implementation introduced in [30]; however, such an implementation is more complex from an engineering perspective, and it further increases the training time of DenseNet [30]. The main reason that DenseNet requires more training time is that it uses many small convolutions, which run slower on GPUs than compact large convolutions with the same GFLOPs. In short, there is a dilemma in the choice between ResNet and DenseNet for broad applications in terms of performance and GPU resources.

This paper proposes a dense normalized shortcut as an alternative dense connection technique to mitigate this dilemma. The proposed dense normalized shortcut outperforms ResNet [14] by a significant margin with negligible parameter overhead, and it achieves comparable performance to DenseNet while requiring less computation. Our proposed network adopts the same backbone (convolutional block design) as ResNet and replaces the identity shortcut with our dense normalized shortcut. Our approach uses neither identity shortcuts nor dense concatenation. From this perspective, this work is most similar to FractalNet [22], whose structural layouts are precisely fractal. To the best of our knowledge, FractalNet is the only work that explored training deep networks with neither identity shortcuts nor dense concatenation; however, its performance is less favourable. The non-linear shortcuts introduced in [50] lead to a performance boost; however, without the identity shortcut, their performance decreases when the network is very deep.

Overall, this paper provides a unified perspective of dense summation to analyze ResNet and DenseNet, which facilitates a better understanding of their core difference. Based on this perspective, we propose dense weighted normalized shortcuts to alleviate the drawbacks of the two existing dense connection techniques. We evaluate the proposed DSNet on several benchmark datasets; the results show that it outperforms ResNet by a significant margin and, with fewer parameters, achieves comparable (or slightly better) performance than DenseNet while requiring fewer computation resources.

2 Related works

Deep CNN design has become a very active research topic, and numerous techniques have contributed to the success of deep learning in the computer vision field. These techniques can be roughly divided into two categories: micro-module design and macro-architecture design.

2.1 Micro-module design

Micro-modules, such as normalization modules [20], attention modules [16], group convolutions [51], and bottleneck designs [14], can be inserted into existing macro-architecture networks to improve performance. Among them, normalization techniques [20] are the most widely used: by default, almost all deep learning models adopt batch normalization [20] to improve performance and speed up convergence. The random sampling in batch normalization also improves the generalization capability of the model. Recently, it has been shown in [5] that batch normalization might increase adversarial vulnerability, and updating the moving-average statistics is found to improve the model's robustness to corruptions. Alternative normalization techniques, such as weight normalization [32], instance normalization [40], and layer normalization [1], have been investigated to remove the dependency between samples during training. These alternatives can also speed up convergence but often do not match the performance of batch normalization. Both instance normalization and layer normalization can be seen as special cases of the later proposed group normalization [43]. Similar to instance normalization and layer normalization, group normalization performs normalization along the channel direction instead of the batch direction, enabling it to work effectively in memory-intensive applications where only a small number of samples can be processed in one batch [43]. Previous works have mainly used normalization techniques in the residual path, whereas our work explores their effect in the dense shortcut path. Normalization in a non-linear shortcut has also been investigated in [50]. We explore normalization techniques in the proposed dense shortcut path mainly due to their lightweight nature.
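As a concrete illustration of this relationship (our example, not from the paper), group normalization subsumes layer-style and instance-style normalization through its number of groups:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 32, 16, 16)   # (batch, channels, height, width)

# Group normalization computes statistics per sample over groups of channels,
# so it does not depend on the batch dimension (unlike batch normalization).
gn = nn.GroupNorm(num_groups=8, num_channels=32)

# The two special cases mentioned above:
ln_like = nn.GroupNorm(num_groups=1, num_channels=32)    # one group of all channels, layer-norm-like
in_like = nn.GroupNorm(num_groups=32, num_channels=32)   # one channel per group, instance-norm-like

for name, norm in [("GN", gn), ("LN-like", ln_like), ("IN-like", in_like)]:
    y = norm(x)
    print(name, tuple(y.shape), round(float(y.mean()), 4), round(float(y.std()), 4))
```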

2.2 Macro-architecture design

Macro-architecture design, on the other hand, aims to produce backbone network structures that improve performance across different tasks, for which the standard evaluation benchmark is classification accuracy on ImageNet, CIFAR, etc. Famous macro-architectures include AlexNet [21], VGGNet [34], GoogleNet and its variants [37, 38], ResNet [14], and DenseNet [18]. Being pioneering networks in the deep learning field, AlexNet, VGGNet and GoogleNet are still widely used by many researchers to design prototype networks for their applications. However, there is a trend in the research community that ResNet and DenseNet have become more favourable choices due to their competitive performance and simple designs. ResNet has two famous variants, WideResNet [45] and ResNeXt [44], which explore the dimensions of width and cardinality respectively. The original ResNet demonstrated that the identity shortcut contributes to stabilizing the training of deep networks; however, the performance decreases when the network becomes extremely deep. Preactivation-ResNet [15] solved this problem by re-ordering the activations in the ResNet module, but its gain over the original ResNet is only observed in extremely deep networks [15]. Later, it was found that extreme depth is unnecessary, since it yields worse performance than increasing the width of the network with a similar number of parameters [45]. ResNeXt further explored the influence of cardinality, which is more effective than increasing either depth or width. Despite some small differences between ResNet and its two variants, WideResNet and ResNeXt, one thing in common is that they all use the identity shortcut. DenseNet, instead, effectively uses the concatenation technique without an identity shortcut. Moreover, CondenseNet [17], a variant of DenseNet, exploits the power of dense connection in a more extreme way; one of its most significant differences from DenseNet is that layers with different feature-map resolutions are also densely connected. Furthermore, Dual Path Networks [6] and Mixed Link Networks [41] integrate ResNet and DenseNet into one network by using both identity shortcuts and dense concatenation. Our work differs from previous works in that our approach adopts neither identity shortcuts nor dense concatenation. FractalNet [22] also explored training ultra-deep networks relying on neither identity shortcuts nor dense concatenation; however, it provides less favourable performance. In the experiment section, we will show that our proposed approach outperforms both ResNet and DenseNet.

3 Proposed approach

3.1 Background: Dense connection exists in ResNet and DenseNet

What is the difference between ResNet and DenseNet? As the names suggest, the difference seems to be that ResNet only reuses the immediately preceding feature-map, while DenseNet uses the features of all preceding convolutional blocks. However, a shared philosophy unifies ResNet and DenseNet: both connect to the feature-maps of all preceding convolutional blocks [18]. A similar finding has been revealed in [6, 41]; that is to say, dense connection exists in both ResNet and DenseNet [6, 41]. For a typical convolution in DNNs, we formulate it as

$$x_l = W_l \circledast \hat{x}_{l-1} \qquad (1)$$

where $x_l$ and $\hat{x}_{l-1}$ indicate the current feature-map and the previous feature-map respectively; "$\circledast$" indicates the convolution operation and $W_l$ indicates the convolution weight. For simplicity, we do not take the bias term of the convolution into account. In a VGG-style network [34], $\hat{x}_{l-1}$ is only the preceding feature-map $x_{l-1}$. In DenseNet, however, $\hat{x}_{l-1}$ concatenates the feature-maps of all preceding convolutional blocks, as illustrated in Figure 1 (b):

$$\hat{x}_{l-1} = x_0 \oplus x_1 \oplus \cdots \oplus x_{l-1} \qquad (2)$$

where $x_i$ represents each of the preceding feature-maps and "$\oplus$" represents the concatenation operation. Replacing $\hat{x}_{l-1}$ in Eq. 1 with the above, we get

$$x_l = W_l \circledast (x_0 \oplus x_1 \oplus \cdots \oplus x_{l-1}) \qquad (3)$$

from which it is obvious that DenseNet uses the feature-maps of all preceding convolutional block outputs. In ResNet,

$$\hat{x}_{l-1} = x_{l-1} + \hat{x}_{l-2} \qquad (4)$$

and it may seem that ResNet only reuses the preceding feature-map $x_{l-1}$. However, as shown in Figure 1 (a), we can recursively extend this function and get

$$\hat{x}_{l-1} = x_{l-1} + x_{l-2} + \cdots + x_0 = \sum_{i=0}^{l-1} x_i \qquad (5)$$

Likewise, inserting the above $\hat{x}_{l-1}$ into Eq. 1, for ResNet we get

$$x_l = W_l \circledast (x_0 + x_1 + \cdots + x_{l-1}) \qquad (6)$$

from which it is clear that ResNet also connects to the feature-maps of all preceding convolutional blocks [18, 6, 41]. The difference between ResNet and DenseNet is that ResNet adopts summation to connect all preceding feature-maps while DenseNet concatenates all of them [41].

Figure 1: (a) ResNet and (b) DenseNet.
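To make the recursion in Eq. 4 and Eq. 5 concrete, the following toy sketch (ours, not from the paper; random linear maps merely stand in for convolutional blocks) unrolls a chain of identity shortcuts and checks numerically that the aggregated feature equals the plain sum of all preceding block outputs:

```python
import torch

torch.manual_seed(0)

# Toy "convolutional blocks": random linear maps acting on a feature vector.
blocks = [torch.nn.Linear(16, 16) for _ in range(5)]

x_hat = torch.randn(1, 16)          # x_0: the feature feeding the first block
block_outputs = [x_hat]             # collects x_0, x_1, ... for the check

# ResNet-style forward pass: x_hat_l = x_l + x_hat_{l-1} (Eq. 4)
for block in blocks:
    x_l = block(x_hat)              # residual-branch output of the current block
    block_outputs.append(x_l)
    x_hat = x_l + x_hat             # identity shortcut

# Eq. 5: the aggregated feature equals the dense sum of all preceding outputs.
dense_sum = torch.stack(block_outputs).sum(dim=0)
print(torch.allclose(x_hat, dense_sum, atol=1e-6))   # True
```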

3.2 Unified perspective with dense summation

As analyzed above, DenseNet differs from ResNet because they adopt different dense connection methods: summation vs. concatenation. Here, we demonstrate that dense concatenation before the convolution operation is equivalent to dense summation after the convolution; thus Eq. 3 can be reformulated as

$$x_l = W_l^0 \circledast x_0 + W_l^1 \circledast x_1 + \cdots + W_l^{l-1} \circledast x_{l-1} = \sum_{i=0}^{l-1} W_l^i \circledast x_i \qquad (7)$$

where the single convolution weight $W_l$ is simply divided into multiple small convolution weights $W_l^i$, each having the same channel size as the corresponding $x_i$. Note that $W_l$ here indicates the first convolution in the convolutional block rather than the whole convolutional block; thus, $x_l$ is the feature-map after the first convolution in the convolutional block. The above equivalence is illustrated in Figure 2, where $C_{in}$ and $C_{out}$ are the number of input channels and output channels respectively. A similar observation has been made in [44]. For ResNet, we transform Eq. 6 into

$$x_l = W_l \circledast x_0 + W_l \circledast x_1 + \cdots + W_l \circledast x_{l-1} = \sum_{i=0}^{l-1} W_l \circledast x_i \qquad (8)$$

We further summarize Eq. 7 and Eq. 8 as follows:

$$\text{DenseNet: } x_l = \sum_{i=0}^{l-1} W_l^i \circledast x_i, \qquad \text{ResNet: } x_l = \sum_{i=0}^{l-1} W_l \circledast x_i \qquad (9)$$

From Eq. 9 we observe that both ResNet and DenseNet involve a dense summation of convolved preceding feature-maps. This interesting observation provides a more unified perspective for perceiving ResNet and DenseNet in terms of their resemblance to each other. However, the main purpose of formulating this "unified perspective" is to better understand their core difference and thus their pros and cons. Comparing the two formulas in Eq. 9, we find that the core difference lies in the fact that the convolution weight $W_l$ in ResNet is shared for every preceding layer output, while $W_l^i$ is different for every $x_i$ in DenseNet. This core difference leads to other differences in practical use. The number of input channels of the convolution in DenseNet grows with $l$ and is normally larger than that of $W_l$ in ResNet when $l$ becomes large. Due to the concatenation design, each convolution in DenseNet is normally very small, but there are many more of them; thus, DenseNet in practical use often requires more training time. Moreover, the large number of input channels caused by concatenation also requires more GPU memory. However, the merit of the DenseNet design is that it has more flexibility in using previous feature-maps, because each $W_l^i$ is different.

It is worth mentioning that Eq. 9 does not reflect the practical implementations: on GPUs, a single compact convolution over the concatenated input (Eq. 3) is much faster than the many small convolutions summed in Eq. 7, and summing the feature-maps before one convolution (Eq. 6) is much more efficient than convolving each preceding feature-map separately and then summing (Eq. 8). Our analysis above only demonstrates the theoretical connection between ResNet and DenseNet.

Figure 2: Equivalence of Dense concatenation before convolution and Dense summation after convolution.
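The equivalence behind Figure 2 and Eq. 7 can be checked numerically; the following sketch (ours) splits a convolution weight along its input channels and verifies that concatenation-before-convolution matches per-part convolution followed by summation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two "preceding" feature-maps with 8 and 16 channels, as in dense concatenation.
x0 = torch.randn(1, 8, 14, 14)
x1 = torch.randn(1, 16, 14, 14)

# One convolution applied to the concatenated input (DenseNet view, Eq. 3).
conv = nn.Conv2d(8 + 16, 32, kernel_size=1, bias=False)
y_concat = conv(torch.cat([x0, x1], dim=1))

# Split the same weight along the input-channel axis into W^0 and W^1 (Eq. 7).
w0, w1 = conv.weight[:, :8], conv.weight[:, 8:]
y_sum = nn.functional.conv2d(x0, w0) + nn.functional.conv2d(x1, w1)

print(torch.allclose(y_concat, y_sum, atol=1e-5))   # True: concat-then-conv == conv-then-sum
```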

3.3 Dense shortcut and DSNet

With the above unified perspective, the core difference between ResNet and DenseNet is revealed to be whether the convolution parameters are shared for each preceding output. This leads to the superior performance of DenseNet, at the cost of requiring more GPU resources. The difference originates from the adoption of different dense connection techniques: the identity shortcut and dense concatenation. In this paper we propose an alternative dense connection designed to alleviate their drawbacks: it introduces flexibility in using the preceding feature-maps while still using the same $W_l$ for each preceding feature-map. Starting from the ResNet formula in Eq. 9, we propose

$$x_l = W_l \circledast \sum_{i=0}^{l-1} \mathrm{DS}_i(x_i) \qquad (10)$$

which is equivalent to

$$x_l = \sum_{i=0}^{l-1} W_l \circledast \mathrm{DS}_i(x_i) \qquad (11)$$

where "$\mathrm{DS}$" indicates the dense shortcut. It refers to a dense weighted normalized shortcut consisting of a normalization and a channel-wise weight. Specifically, $\mathrm{DS}_i(x) = w_i \cdot N(x)$, where $w_i$ represents the channel-wise weight, "$\cdot$" denotes channel-wise multiplication, and $N(\cdot)$ indicates the normalization operation. In this work, the term "dense shortcut" is equivalent to "dense weighted normalized shortcut" unless otherwise specified. Eq. 11 is illustrated in Figure 3 (a). However, we conjecture that the aggregation output $\hat{x}_i$ contains more useful features than its corresponding single convolutional block output $x_i$; thus, by replacing $x_i$ with $\hat{x}_i$, we propose another variant:

$$x_l = \sum_{i=0}^{l-1} W_l \circledast \mathrm{DS}_i(\hat{x}_i) \qquad (12)$$

which is illustrated in Figure 3 (b). The conjecture that the aggregation output $\hat{x}_i$ is more informative than $x_i$ is supported by the experimental results (see Table 1); therefore we mainly adopt variant (b) of Figure 3 in this study. We term the proposed network adopting the DS shortcut DSNet. DSNet adopts the same network backbone (the convolutional block itself and the block design) as ResNet [14]. The ResNet backbone is tailored for the identity shortcut, not for our proposed shortcut; it is conceivable that redesigning the backbone might further improve the performance of DSNet, but this is beyond the scope of this work. The only difference between our DSNet and ResNet [14] is that the identity shortcut is replaced with the proposed dense shortcut (i.e., the dense weighted normalized shortcut).
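As a rough illustration only (our sketch, not the authors' released code; the module names, the group-normalization setting, and the toy stage wiring are our assumptions), a dense weighted normalized shortcut can be written as an affine-free normalization followed by a learnable channel-wise weight. For clarity, the toy stage below wires the shortcuts to the block outputs as in variant (a) of Figure 3; variant (b), which the paper prefers, would instead tap the aggregation outputs:

```python
import torch
import torch.nn as nn

class DenseShortcut(nn.Module):
    """DS(x) = w * N(x): normalization without affine parameters, followed by a
    learnable channel-wise weight (group normalization, following the ablation)."""
    def __init__(self, channels, groups=32):
        super().__init__()
        self.norm = nn.GroupNorm(groups, channels, affine=False)
        self.weight = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        return self.weight * self.norm(x)


class DSStage(nn.Module):
    """Toy stage: the input to each block is a DS-weighted sum of all preceding
    feature-maps within the stage, with one DS module per connection."""
    def __init__(self, num_blocks, channels):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            for _ in range(num_blocks)
        )
        self.shortcuts = nn.ModuleList(
            nn.ModuleList(DenseShortcut(channels) for _ in range(l + 1))
            for l in range(num_blocks)
        )

    def forward(self, x):
        features = [x]                                    # x_0, x_1, ...
        for l, block in enumerate(self.blocks):
            block_in = sum(ds(f) for ds, f in zip(self.shortcuts[l], features))
            features.append(block(block_in))              # x_{l+1}
        return features[-1]


stage = DSStage(num_blocks=3, channels=64)
print(stage(torch.randn(2, 64, 8, 8)).shape)              # torch.Size([2, 64, 8, 8])
```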

To introduce dense shortcuts into ResNet, one naive approach is to densely connect all preceding feature-maps by replacing the single identity shortcut in Eq. 4 with dense identity shortcuts, which gives

$$\hat{x}_{l-1} = x_{l-1} + \hat{x}_{l-2} + \hat{x}_{l-3} + \cdots + \hat{x}_0 \qquad (13)$$

which can be recursively extended as

$$\hat{x}_{l-1} = \sum_{i=0}^{l-1} c_i\, x_i, \quad c_i \ge 1 \qquad (14)$$

Comparing this with Eq. 5, we find that the dense identity shortcut is equivalent to multiplying the output of each convolutional block by an extra constant. Such a design, denoted ResNet50-dense in Table 1, does not achieve better performance than the original ResNet50, which demonstrates the failure of the naive dense identity shortcut.
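A small symbolic check (ours, plain Python) makes the effect of Eq. 13 explicit: unrolling the recursion shows each earlier block output accumulating a constant coefficient larger than one, instead of the uniform coefficients of Eq. 5.

```python
# Track the coefficient of each block output x_i inside the aggregated feature
# \hat{x}_l when every preceding aggregate is added (Eq. 13 / Eq. 14).
from collections import defaultdict

L = 5
aggregates = [defaultdict(int)]
aggregates[0]["x0"] = 1                        # \hat{x}_0 = x_0

for l in range(1, L + 1):
    agg = defaultdict(int)
    agg[f"x{l}"] = 1                           # current block output x_l
    for prev in aggregates:                    # dense identity shortcuts: add every \hat{x}_i
        for name, c in prev.items():
            agg[name] += c
    aggregates.append(agg)

for l, agg in enumerate(aggregates):
    print(f"x_hat_{l}:", dict(agg))
# Coefficients of earlier x_i grow as fixed constants (1, 1, 2, 4, ...), whereas
# the plain identity shortcut of Eq. 5 keeps every coefficient equal to 1.
```

Next, we illustrate the motivation behind the design of our DS shortcut.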

The motivation for using normalization is to bring all preceding features to a similar scale, so that no single preceding feature dominates the summation, which facilitates training. Note that no affine transformation is applied in the normalization. The weighted summation gives the network the freedom to assign a proper weight to each normalized feature-map depending on its significance; since it is cumbersome to decide these weights manually, they are set as learnable parameters. Moreover, we empirically find that inserting a weighted normalized shortcut within the convolutional block, on the 3×3 convolution, also contributes to the performance improvement, and since it adds almost no computational burden, it is worth including. We term this variant DS2Net.

Figure 3: Proposed DSNet adopting the dense (weighted normalized) shortcut. (a) Densely connected to the convolutional block outputs $x_i$; (b) densely connected to the aggregation outputs $\hat{x}_i$.

3.4 Ablation study

We adopt the widely used ResNet50 backbone for the ablation studies. The tests are conducted on CIFAR-100 and the results are reported in Table 1. Note that the width of the network is set to 0.25 times the width of the ResNet in [14] to save computation resources. We make two major observations from Table 1. First, normalization techniques are important for improving performance; specifically, group normalization (GN) outperforms batch normalization (BN) by a visible margin. Instance normalization (IN) and layer normalization (LN) are two special cases of GN; LN performs slightly worse than GN, and IN performs the worst. Second, the weight parameters are also critical for improving performance; in particular, we empirically find that adopting channel-wise weights is crucial for the performance gain, which indicates that different channels should have different weights. Overall, these two observations support the intuition behind adopting weighted normalized shortcuts. Other observations include the superiority of DS2Net over DSNet and the inferiority of DSNet-a relative to DSNet.

Architecture Top-1 (%)
ResNet50 (0.25) [14] 26.13
ResNet50-dense (0.25) 26.33
DSNet50-a (GN + weight) (0.25) 25.23
DSNet50 (GN + weight) (0.25) 24.12
DS2Net50 (GN + weight) (0.25) 23.72
DSNet50 (LN + weight) (0.25) 24.43
DSNet50 (BN + weight) (0.25) 24.90
DSNet50 (IN + weight) (0.25) 28.11
DSNet50 (None + weight) (0.25) 26.26
DSNet50 (GN + no weight) (0.25) 25.85
Table 1: Classification error (%) on the CIFAR-100 validation dataset for the ablation study; "-a" indicates structure (a) in Figure 3, others by default adopt structure (b) in Figure 3.
Architecture CIFAR-100 CIFAR-10
ResNet50 (1) [14] 21.43 4.82
DSNet50 (1) 19.95 4.54
DS2Net50 (1) 19.00 4.33
ResNet50 (0.25) 26.13 6.65
DSNet50 (0.25) 24.12 5.95
DS2Net50 (0.25) 23.72 5.77
ResNet101 (0.25) 25.00 6.00
DSNet101 (0.25) 23.94 5.82
DS2Net101 (0.25) 23.61 5.68
Table 2: Classification error (%) on CIFAR-100 and CIFAR-10 validation dataset for different widths and depths.

4 Experimental Results and Analysis

In this section, we conduct experiments across a range of datasets to evaluate the proposed dense shortcut.

4.1 CIFAR experiments

We first evaluate it on the CIFAR datasets. Both CIFAR-100 and CIFAR-10 contain 60,000 colour images with a resolution of 32×32; 50,000 images are used for training and the remaining 10,000 for validation. For data augmentation, we adopt the widely used random cropping with 4-pixel padding and horizontal flipping. The preprocessing normalizes the images with the training-set channel means and standard deviations. For all CIFAR experiments, by default, we set the weight decay to 0.0005 and train for 64k iterations; the initial learning rate is set to 0.1 and is divided by 10 at 32k and 48k iterations, similar to [14]. Since we adopt the same backbone as ResNet and replace the identity shortcut with our dense shortcut, we mainly compare our results with the original ResNets [14].
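For reference, the CIFAR recipe described above maps to a standard PyTorch setup roughly as follows (our sketch: the model is a stock ResNet-50 stand-in rather than a DSNet, the momentum value and the CIFAR-100 channel statistics are our assumptions, and data loading and the optimization step are omitted):

```python
import torch
import torchvision
import torchvision.transforms as T

# Augmentation as described: 4-pixel padding + 32x32 random crop, horizontal flip,
# then standardization with training-set channel means / standard deviations.
cifar100_mean = (0.5071, 0.4865, 0.4409)   # commonly used CIFAR-100 statistics (assumed)
cifar100_std = (0.2673, 0.2564, 0.2762)
train_transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(cifar100_mean, cifar100_std),
])  # would be passed to the CIFAR-100 training dataset

# Stand-in model with a 100-class head (the paper uses a DSNet on the ResNet backbone).
model = torchvision.models.resnet50(num_classes=100)

# Weight decay 5e-4; initial lr 0.1, divided by 10 at 32k and 48k of 64k iterations.
# Momentum 0.9 is our assumption, not stated in the text.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[32_000, 48_000], gamma=0.1)

for iteration in range(64_000):
    # forward / backward / optimizer.step() on one mini-batch would go here
    scheduler.step()   # the schedule is stepped per iteration, not per epoch
```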

The results in Table 2 show that our proposed approach outperforms ResNet by a significant margin. On CIFAR-100, DSNet50 (0.25) outperforms ResNet50 (0.25) by a margin of 2%. DS2Net further improves upon DSNet by a visible margin. The same trend is observed for both wider (width 1) and deeper (depth 101) networks. CIFAR-10 mirrors this story, so for simplicity we only report CIFAR-100 results in the remainder of the paper. To further evaluate the robustness of the proposed DSNet to different depths and widths, we conduct extra experiments, with the results reported in Table 3 and Table 4 respectively.

Depth block design ResNet DSNet DS2Net
26 [2, 2, 2, 2] 27.74 26.80 26.30
38 [3, 3, 3, 3] 27.26 25.59 25.30
50 [3, 4, 6, 3] 26.13 24.12 23.72
77 [3, 4, 15, 3] 25.32 23.95 23.55
101 [3, 4, 23, 3] 25.01 23.63 23.40
Table 3: Classification error (%) on CIFAR-100 validation dataset for different depths with the width 0.25.
Width ResNet50 DSNet50 DS2Net50
0.25 26.13 24.12 23.72
0.25(WRN) 23.51 22.00 21.58
0.5 23.38 21.59 21.02
1.0 21.43 19.95 19.00
Table 4: Classification error (%) on CIFAR-100 validation dataset for different widths with ResNet as the backbone.

For the depth exploration, we set the width to 0.25, and for the width exploration, we set the depth to 50 layers. We observe that DSNet consistently outperforms ResNet over a wide range of depths and widths. Since it has been shown in [45] that increasing the width of the network is more effective than increasing the depth for improving performance, in the following experiments we always set the depth to 50 layers. Furthermore, we evaluate the proposed dense shortcut on a famous variant of ResNet, ResNeXt; the results in Table 5 show the same trend as with the original ResNet. Note that ResNeXt has a similar number of parameters to ResNet, while wide ResNet has almost three times more parameters and GFLOPs. Since wide ResNet essentially adopts the same structure as ResNet, the only small difference being the doubled Conv 3×3 feature-maps, we report only one such case in Table 4 to avoid redundancy and do not perform more experiments on this structure.

Even though ResNeXt is a well-optimized structure, our proposed approach further improves its performance by a significant margin. Interestingly, DS2NeXt (0.5) even outperforms ResNeXt (1) by a margin of 0.66%, which is quite surprising because ResNeXt (1) has around four times more parameters and GFLOPs than DS2NeXt (0.5). Note that the number of parameters and GFLOPs increases linearly with depth but quadratically with width.

We further compare our performance on CIFAR-100 with the results reported in previous works. WRN-28-10 does not adopt the bottleneck structure, so DS2WRN-28-10 is not applicable (the shortcut within the Conv block can only be inserted on the Conv 3×3 in the bottleneck for the dimensions to match). From Table 6 we observe that the networks that do not use dense connections perform much worse than those that do. FractalNet, which adopts neither identity shortcuts nor dense concatenation but relies on careful engineering design, performs much better than the other networks without dense connections; however, its performance is still not comparable to ResNet variants or DenseNets. Similar to FractalNet, our proposed DSNet adopts neither identity shortcuts nor dense concatenation, yet it outperforms both ResNets (including WRN and ResNeXt) and DenseNets. The results show that the dense weighted normalized shortcut constitutes a competitive dense connection technique.

Width ResNeXt50 DSNeXt50 DS2NeXt50
0.25 23.86 22.98 22.58
0.5 21.16 19.95 19.51
1.0 20.17 18.58 18.24
Table 5: Classification error (%) on CIFAR-100 validation dataset for different widths with ResNeXt as the backbone.
Architecture params Top-1 (%)
ALL-CNN [35] - 33.71
Deeply supervised Net [23] - 34.57
Highway Network [36] - 32.39
FractalNet [22] 38.6M 23.30
with dropout [22] 38.6M 23.73
ResNet [14] 1.7M 27.22
ResNet with stochastic depth [19] 1.7M 24.58
Preactivation ResNet [15] 10.2M 22.71
DenseNet (k = 24) [18] 27.2M 19.25
DenseNet-BC (k = 24) [18] 15.3M 17.60
DenseNet-BC (k = 40) [18] 25.6M 17.18
WRN-28-10 [45] 36.5M 19.25
with dropout [45] 36.5M 18.85
ResNeXt-29, 8×64d [44] 34.4M 17.77
ResNeXt-29, 16×64d [44] 68.1M 17.31
DSWRN-28-10 36.5M 18.33
DSNeXt-29, 664 34.4M 16.85
DS2NeXt-29, 664 34.4M 16.39
Table 6: Classification error (%) on CIFAR-100 validation dataset.

4.2 ImageNet experiments

ImageNet is the benchmark dataset for classification and is commonly used to evaluate and compare different approaches. Our implementation details follow ResNet [14]. Specifically, we adopt the commonly used random cropping with scale and aspect-ratio augmentation for training, and we use SGD as the optimizer. For typical training on ImageNet, 8 GPUs are used with a batch size of 256 [14]. We use 4 GPUs to train the proposed DSNet with a batch size of 128 (32 per GPU); accordingly, taking the linear scaling rule into account, we set the initial learning rate to 0.05 instead of the commonly used 0.1. We train the network for 100 epochs and divide the learning rate by 10 every 30 epochs. We report single-crop classification errors on the validation dataset with an input image size of 224×224. Both top-1 and top-5 errors are reported, and the results are available in Table 7.
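The learning-rate choice follows the linear scaling rule; a minimal sketch of the schedule is given below (ours: the stock ResNet-50, the momentum, and the weight decay are assumptions, and data loading is omitted):

```python
import torch
import torchvision

# Linear scaling rule: reference lr 0.1 at batch size 256; here the batch size is 128.
batch_size = 128
base_lr, base_batch = 0.1, 256
lr = base_lr * batch_size / base_batch           # 0.05, as used above

model = torchvision.models.resnet50()             # stand-in backbone (assumed)
optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                            momentum=0.9, weight_decay=1e-4)   # momentum / wd assumed
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(100):
    # one training pass over ImageNet would go here
    scheduler.step()                              # lr divided by 10 every 30 epochs
```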

Architecture Params Top-1 (%) Top-5 (%)
ResNet50 [14] 25.6M 24.01 7.02
ResNet101 [14] 44.6M 22.44 6.21
ResNet152 [14] 60.2M 22.16 6.16
WRN-50-2-bottleneck [45] 69.8M 21.9 6.03
ResNeXt-50, 32×4d [44] 25M 22.2 -
SE-ResNet50 [16] 28.1M 23.29 6.62
CBAM-ResNet50 [42] 28.1M 22.66 6.31
b-RGSNet50 [50] 25.6M 22.68 6.42
Res-RGSNet50 [50] 25.6M 22.21 5.99
DenseNet201 [18] 20M 22.58 6.34
DenseNet264 [18] 33.3M 22.15 6.12
Res2Net [10] 33.3M 22.01 6.15
DSNet50 25.6M 22.49 6.29
DS2Net50 25.6M 22.03 5.93
DS2Res2Net50 25.6M 21.61 5.83
Table 7: Classification error (%) on the ImageNet validation dataset.

We note that the proposed DS2Net50 achieves significantly better performance than the original ResNet50. Somewhat surprisingly, DS2Net50 outperforms SE-Net [16] (which won first place in the ILSVRC 2017 challenge) as well as CBAM, with its improved attention module, by a relatively large margin. DS2Net50 achieves performance equivalent to (if not better than) that of the much deeper ResNet152. Compared with WRN-50-2-bottleneck, DS2Net achieves a slightly worse top-1 error and a slightly better top-5 error; note that WRN-50-2 has almost three times more parameters and GFLOPs than DS2Net. DS2Net50 also achieves slightly better performance than ResNeXt-50, 32×4d and Res-RGSNet50. It is claimed in [18] that DenseNet201 achieves performance equivalent to ResNet101, which has about twice as many parameters and GFLOPs. Even though DS2Net outperforms DenseNet264 with fewer parameters and GFLOPs, we do not argue that this is the main merit of DS2Net over DenseNet; instead, the main advantage is that DS2Net is more time-efficient in practice on GPUs. In short, DSNet shows performance equivalent to or better than DenseNet while avoiding DenseNet's drawbacks. Our proposed DS2Net50 also achieves comparable performance to the recent Res2Net-50 [10]; by applying the dense shortcuts to Res2Net, we further improve its performance by a margin of 0.40%, which is non-trivial considering that Res2Net is already a very well designed architecture. Compared with ResNet50, DS2Net50 improves performance by a large margin: the number of added weight parameters is less than 0.15% of the original parameter count, and only a small computation overhead is added, because DS2Net still adopts the same backbone structure as the original ResNet and only light operations are needed for the added shortcuts.
The training curves are shown in Figure 4. We observe that the proposed dense weighted normalized shortcut also helps speed up convergence. It is also worth noting that the training error of DS2Net is much smaller.

Figure 4: Training curves on ImageNet.
Figure 5: Grad-CAM [33] visualization results of ResNet50 (second row), DSNet50 (third row), and DS2Net50 (last row).

4.3 COCO detection dataset experiments

To evaluate the dataset-generalization capability of the proposed DSNet, we further evaluate it on the MS COCO 2014 detection dataset [24]. Faster R-CNN [31] is chosen as the detection method. The network is pre-trained on ImageNet and then fine-tuned on COCO for 5 epochs for fast performance validation. The results are available in Table 8: on the COCO detection dataset, our proposed DSNet achieves better performance than ResNet50.

Backbone mAP@0.5 mAP@0.75 mAP@[0.5, 0.95]
ResNet50 51.3 33.6 31.4
DSNet50 54.2 36.0 33.7
DS2Net50 54.3 36.2 33.7
Table 8: mAP (%) on MS COCO validation dataset.

4.4 Visualization with Grad-CAM

We apply the widely used Grad-CAM [33] to ResNet50 [14] and our proposed DSNet/DS2Net on images from the ImageNet validation set. The visualization results are shown in Figure 5. Grad-CAM computes the gradients with respect to a certain class, so the Grad-CAM result shows the regions attended to in the image. We observe that DSNet attends more to the objects and has a sharper focus (i.e., attention) than ResNet50. No obvious difference is observed between DSNet50 and DS2Net50. More qualitative results are available in the supplementary material.
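For readers who want to reproduce this kind of visualization, a minimal self-contained Grad-CAM sketch is given below (ours, not the paper's visualization code; a stock ResNet-50 is used as a stand-in backbone and the input is random):

```python
import torch
import torch.nn.functional as F
import torchvision

# Minimal Grad-CAM: hook the last convolutional stage, weight its activations by the
# spatially pooled gradients of the class score, then ReLU and upsample to input size.
model = torchvision.models.resnet50().eval()     # stand-in for ResNet50 / DSNet50
target_layer = model.layer4                      # last convolutional stage

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(value=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0]))

image = torch.randn(1, 3, 224, 224)              # a preprocessed input image
logits = model(image)
class_idx = logits.argmax(dim=1).item()
logits[0, class_idx].backward()                  # gradients w.r.t. the chosen class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)    # channel importance
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # normalize to [0, 1]
print(cam.shape)                                 # torch.Size([1, 1, 224, 224])
```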

4.5 Implementation design and memory/speed test

A straightforward implementation of the dense normalized shortcut would perform normalization with an affine transformation in every shortcut. This causes unnecessary computation and a slight parameter overhead: first, the normalization of a given preceding feature can be shared across shortcuts; second, the affine transformation by default has both a scale and a bias, while densely summing the biases is mathematically redundant. In our implementation, for every aggregation output $\hat{x}_i$, we perform the normalization only once, and the result is shared by all dense shortcuts linked to it. Thus, the per-shortcut operation is only a multiplication of the shared normalized feature by the corresponding weight, which is very light. This implementation choice does not influence the performance but requires less computation time.
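A rough sketch of this implementation choice (ours; the names and the group-normalization setting are assumptions): the normalization of an aggregation output is computed once and reused, and each outgoing shortcut only applies its own channel-wise weight:

```python
import torch
import torch.nn as nn

class SharedNormDenseShortcuts(nn.Module):
    """The aggregation output is normalized once (no affine, no bias) and the
    normalized tensor is reused by every shortcut that reads it; the per-shortcut
    work is then only a channel-wise multiplication by a learnable weight."""
    def __init__(self, channels, num_destinations, groups=32):
        super().__init__()
        self.norm = nn.GroupNorm(groups, channels, affine=False)   # shared, affine-free
        # One channel-wise weight per destination block that this feature feeds.
        self.weights = nn.Parameter(torch.ones(num_destinations, 1, channels, 1, 1))

    def forward(self, x):
        normalized = self.norm(x)                        # computed once, shared by all shortcuts
        return [w * normalized for w in self.weights]    # cheap per-shortcut multiplies


shortcuts = SharedNormDenseShortcuts(channels=64, num_destinations=3)
outs = shortcuts(torch.randn(2, 64, 8, 8))
print(len(outs), outs[0].shape)                          # 3 torch.Size([2, 64, 8, 8])
```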

Architecture Top-1 (%) memory (MB) time (s)
ResNet50 [14] 24.01 3929 0.31
ResNet152 [14] 22.16 7095 0.63
DenseNet264 [18] 22.15 9981 0.60
DSNet50 22.49 4777 0.37
DS2Net50 22.03 5133 0.39
Table 9: GPU memory and training time on ImageNet; memory is per GPU and time is per iteration.

With the above implementation, we measure the ImageNet training memory and speed on the same machine (equipped with four 1080Ti GPUs) with the hyperparameters specified above. We report the consumed memory per GPU and the computation time per iteration in Table 9. DS2Net (with fewer parameters, as shown in Table 7) compares favourably with ResNet152 and DenseNet264 in both memory and speed. Compared with ResNet50, the increase in memory and computation for DSNet/DS2Net is marginal.

5 Conclusions

We provide a unified perspective of dense summation to facilitate the understanding of the core difference between ResNet and DenseNet. We demonstrate that the core difference lies in whether the convolution parameters are shared across the preceding feature-maps. We propose a dense weighted normalized shortcut as an alternative dense connection method, which outperforms the two existing dense connection techniques: the identity shortcut in ResNet and the dense concatenation in DenseNet. We find that dense summation from the aggregation outputs provides superior performance to that from the convolutional block outputs. In short, the dense shortcut addresses the limited representational capacity of ResNet while avoiding DenseNet's drawback of requiring more GPU resources. The proposed DSNet has been evaluated on multiple benchmark datasets and shows superior performance to its counterpart ResNet; for example, on ImageNet, DS2Net50 achieves better performance than the much deeper ResNet152. On ImageNet, with fewer parameters and less computation, it also achieves comparable performance to DenseNet. Moreover, it achieves comparable performance to the recent Res2Net, which can be further boosted by our dense shortcuts. Compared with other "free" performance-boosting modules such as SE and CBAM, our dense shortcut also achieves superior performance. The Grad-CAM results show that DSNet, in general, focuses better on the object in the image than its counterpart ResNet.

References

  • [1] J. L. Ba, J. R. Kiros, and G. E. Hinton (2016) Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §2.1.
  • [2] J. Bai, B. Chen, Y. Li, D. Wu, W. Guo, S. Xia, and E. Yang (2020) Targeted attack for deep hashing based retrieval. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1.
  • [3] P. Benz, C. Zhang, T. Imtiaz, and I. S. Kweon (2020) Double targeted universal adversarial perturbations. arXiv preprint arXiv:2010.03288. Cited by: §1.
  • [4] P. Benz, C. Zhang, T. Imtiaz, and I. Kweon (2020) Data from model: extracting data from non-robust and robust models. arXiv preprint arXiv:2007.06196. Cited by: §1.
  • [5] P. Benz, C. Zhang, and I. S. Kweon (2020) Batch normalization increases adversarial vulnerability: disentangling usefulness and robustness of model features. arXiv preprint arXiv:2010.03316. Cited by: §2.1.
  • [6] Y. Chen, J. Li, H. Xiao, X. Jin, S. Yan, and J. Feng (2017) Dual path networks. In Advances in Neural Information Processing Systems, pp. 4467–4475. Cited by: §2.2, §3.1.
  • [7] N. Dalal and B. Triggs (2005) Histograms of oriented gradients for human detection. In CVPR, Cited by: §1.
  • [8] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In CVPR, Cited by: §1.
  • [9] Y. Feng, B. Chen, T. Dai, and S. Xia (2020) Adversarial attack on deep product quantization network for image retrieval. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §1.
  • [10] S. Gao, M. Cheng, K. Zhao, X. Zhang, M. Yang, and P. H. Torr (2019) Res2net: a new multi-scale backbone architecture. IEEE transactions on pattern analysis and machine intelligence. Cited by: §4.2, Table 7.
  • [11] R. Girshick, J. Donahue, T. Darrell, and J. Malik (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pp. 580–587. Cited by: §1.
  • [12] R. Girshick (2015) Fast R-CNN. In ICCV, Cited by: §1.
  • [13] I. J. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), Cited by: §1.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1, §1, §2.1, §2.2, §3.3, §3.4, Table 1, Table 2, §4.1, §4.2, §4.4, Table 6, Table 7, Table 9.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun (2016) Identity mappings in deep residual networks. In ECCV, Cited by: §2.2, Table 6.
  • [16] J. Hu, L. Shen, and G. Sun (2018) Squeeze-and-excitation networks. In CVPR, Cited by: §2.1, §4.2, Table 7.
  • [17] G. Huang, S. Liu, L. Van der Maaten, and K. Q. Weinberger (2018) Condensenet: an efficient densenet using learned group convolutions. In CVPR, pp. 2752–2761. Cited by: §2.2.
  • [18] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In CVPR, Cited by: §1, §1, §2.2, §3.1, §4.2, Table 6, Table 7, Table 9.
  • [19] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger (2016) Deep networks with stochastic depth. In European conference on computer vision, pp. 646–661. Cited by: Table 6.
  • [20] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §2.1.
  • [21] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In NIPS, Cited by: §1, §2.2.
  • [22] G. Larsson, M. Maire, and G. Shakhnarovich (2016) Fractalnet: ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648. Cited by: §1, §2.2, Table 6.
  • [23] C. Lee, S. Xie, P. Gallagher, Z. Zhang, and Z. Tu (2015) Deeply-supervised nets. In Artificial intelligence and statistics, pp. 562–570. Cited by: Table 6.
  • [24] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft COCO: common objects in context. In ECCV, Cited by: §1, §4.3.
  • [25] J. Long, E. Shelhamer, and T. Darrell (2015) Fully convolutional networks for semantic segmentation. In CVPR, Cited by: §1.
  • [26] D. G. Lowe (2004) Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60 (2), pp. 91–110. Cited by: §1.
  • [27] A. Mahendran and A. Vedaldi (2015) Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5188–5196. Cited by: §1.
  • [28] C. Olah, A. Mordvintsev, and L. Schubert (2017) Feature visualization. Distill 2 (11), pp. e7. Cited by: §1.
  • [29] G. Philipp, D. Song, and J. G. Carbonell (2018) Gradients explode-deep networks are shallow-resnet explained. In ICLR Workshop, Cited by: §1.
  • [30] G. Pleiss, D. Chen, G. Huang, T. Li, L. van der Maaten, and K. Q. Weinberger (2017) Memory-efficient implementation of densenets. arXiv preprint arXiv:1707.06990. Cited by: §1.
  • [31] S. Ren, K. He, R. Girshick, and J. Sun (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In NIPS, Cited by: §1, §4.3.
  • [32] T. Salimans and D. P. Kingma (2016) Weight normalization: a simple reparameterization to accelerate training of deep neural networks. In NIPS, Cited by: §2.1.
  • [33] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626. Cited by: Figure 5, §4.4.
  • [34] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR, Cited by: §1, §2.2, §3.1.
  • [35] J. T. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller (2014) Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806. Cited by: Table 6.
  • [36] R. K. Srivastava, K. Greff, and J. Schmidhuber (2015) Highway networks. arXiv preprint arXiv:1505.00387. Cited by: Table 6.
  • [37] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In CVPR, Cited by: §1, §2.2.
  • [38] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In CVPR, Cited by: §1, §2.2.
  • [39] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §1.
  • [40] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §2.1.
  • [41] W. Wang, X. Li, J. Yang, and T. Lu (2018) Mixed link networks. arXiv preprint arXiv:1802.01808. Cited by: §2.2, §3.1.
  • [42] S. Woo, J. Park, J. Lee, and I. S. Kweon (2018) CBAM: convolutional block attention module. In ECCV, Cited by: Table 7.
  • [43] Y. Wu and K. He (2018) Group normalization. In ECCV, Cited by: §2.1.
  • [44] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He (2017) Aggregated residual transformations for deep neural networks. In CVPR, Cited by: §2.2, §3.2, Table 6.
  • [45] S. Zagoruyko and N. Komodakis (2016) Wide residual networks. In BMVC, Cited by: §1, §1, §2.2, §4.1, Table 6, Table 7.
  • [46] M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818–833. Cited by: §1.
  • [47] C. Zhang, P. Benz, T. Imtiaz, and I. Kweon (2020) CD-uap: class discriminative universal adversarial perturbation. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §1.
  • [48] C. Zhang, P. Benz, T. Imtiaz, and I. Kweon (2020) Understanding adversarial examples from the mutual influence of images and perturbations. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
  • [49] C. Zhang, F. Rameau, J. Kim, D. M. Argaw, J. Bazin, and I. S. Kweon (2020) DeepPTZ: deep self-calibration for ptz cameras. In Winter Conference on Applications of Computer Vision (WACV), Cited by: §1.
  • [50] C. Zhang, F. Rameau, S. Lee, J. Kim, P. Benz, D. M. Argaw, J. Bazin, and I. S. Kweon (2019) Revisiting residual networks with nonlinear shortcuts. In British Machine Vision Conference (BMVC), Cited by: §1, §1, §1, §2.1, Table 7.
  • [51] T. Zhang, G. Qi, B. Xiao, and J. Wang (2017) Interleaved group convolutions. In ICCV, pp. 4373–4382. Cited by: §2.1.