to network design engineering, and numerous new network architectures have emerged to boost the performance of DNNs. ResNet, which reuses preceding features via the identity shortcut, achieved great success and state-of-the-art performance on several benchmark datasets, such as ImageNet and the COCO detection dataset. Compared to the Inception family of architectures, including GoogleNet and Inception-V3, the ResNet family shows better generalization, which implies that the learned features can be used in transfer learning with higher efficiency.
One reason that makes ResNet exceptionally popular is its simple design strategy, which introduces only one identity shortcut. Despite its great success, the weaknesses of the identity shortcut have been analyzed by follow-up works. The identity shortcut skips residual blocks to preserve features, which limits the representational power of the network [45, 50]. A further drawback of ResNet is that the identity shortcut causes the collapsing domain problem, which reduces the learning capacity of the network; non-linear shortcuts have been proposed to mitigate it.
Another simple yet effective technique, dense concatenation, was proposed in DenseNet to facilitate training deep networks. By densely concatenating the outputs of all preceding layers instead of summing them, DenseNet preserves the features of preceding layers. DenseNet has been shown to use features more efficiently, outperforming ResNet with fewer parameters. Nonetheless, DenseNet requires heavy GPU memory due to the concatenation operations. The memory issue can be mitigated by the memory-efficient implementation introduced in ; however, such an implementation is more complex from an engineering perspective, and it further increases the training time of DenseNet. The main reason DenseNet requires more training time is that it uses many small convolutions, which run slower on GPUs than a compact large convolution with the same number of GFLOPs. In short, there is a dilemma in the choice between ResNet and DenseNet for broad applications in terms of performance and GPU resources.
This paper proposes a dense normalized shortcut as an alternative dense connection technique to mitigate this dilemma. The proposed dense normalized shortcut outperforms ResNet by a significant margin with negligible parameter overhead, and it achieves performance comparable to DenseNet while requiring less computation. Our proposed network adopts the same backbone (convolutional block design) as ResNet and replaces the identity shortcut with our dense normalized shortcut. Our approach uses neither identity shortcut nor dense concatenation. From this perspective, this work is most similar to FractalNet , whose structural layouts are precisely fractal. To the best of our knowledge, FractalNet is the only work that explored training deep networks using neither identity shortcut nor dense concatenation; however, its performance is less favourable. The non-linear shortcuts introduced in  lead to a performance boost; however, without the identity shortcut, performance decreases when the network is very deep.
Overall, this paper provides a unified perspective of dense summation to analyze ResNet and DenseNet, which facilitates a better understanding of their core differences. Based on this perspective, we propose dense weighted normalized shortcuts to alleviate the drawbacks of the two existing dense connection techniques. We evaluate the proposed DSNet on several benchmark datasets; the results show that it outperforms ResNet by a significant margin and, with fewer parameters, achieves comparable (or slightly better) performance than DenseNet while requiring fewer computation resources.
2 Related works
Deep CNN network design has become a very hot research topic, and numerous techniques have contributed to the success of deep learning in the computer vision field. These techniques can be roughly divided into two categories: micro-module design and macro-architecture design.
2.1 Micro-module design
Micro-modules, such as normalization modules , attention modules , group convolutions , and bottleneck designs , can be inserted into existing macro-architecture networks to improve performance. Among them, normalization techniques are the most widely used: by default, almost all deep learning models adopt batch normalization to improve performance and speed up convergence. The random sampling in batch normalization also contributes to improving the generalization capability of the model. Recently, it has been shown in  that batch normalization might increase adversarial vulnerability, and updating the moving-average statistics has been found to improve model robustness to corruption. Alternative normalization techniques, such as weight normalization , instance normalization , or layer normalization , have been investigated to remove the dependency between samples during training. These alternatives can also speed up convergence; however, they often do not provide as good performance as batch normalization. Both instance normalization and layer normalization can be seen as special cases of the later proposed group normalization . Similar to instance and layer normalization, group normalization performs the normalization along the channel direction instead of the batch direction, enabling it to work effectively in memory-intensive applications where only a small number of samples can be processed in one batch . Previous works mainly used normalization techniques in the residual path, whereas our work explores their effect in the dense shortcut path. Normalization in the non-linear shortcut has also been investigated in . We explore normalization techniques in the proposed dense shortcut path mainly due to their lightweight nature.
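The distinction between these normalization axes can be made concrete with a small sketch. The NumPy toy below (illustrative only, not the implementation used in this work) normalizes the same activation tensor over the batch axis (batch norm) and over channel groups within each sample (group norm, which reduces to layer norm with one group and to instance norm with as many groups as channels):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy activations: batch N=4, channels C=8, spatial 2x2.
x = rng.standard_normal((4, 8, 2, 2))
eps = 1e-5

# Batch normalization: statistics over (N, H, W) -- one mean/var per channel,
# so every sample in the batch influences every other sample.
bn = (x - x.mean(axis=(0, 2, 3), keepdims=True)) / np.sqrt(
    x.var(axis=(0, 2, 3), keepdims=True) + eps)

# Group normalization with G=2: statistics over (C/G, H, W) per sample,
# independent of the batch size (G=1 gives layer norm, G=C instance norm).
g = x.reshape(4, 2, 4, 2, 2)  # split the 8 channels into 2 groups
gn = ((g - g.mean(axis=(2, 3, 4), keepdims=True)) / np.sqrt(
    g.var(axis=(2, 3, 4), keepdims=True) + eps)).reshape(4, 8, 2, 2)
```

Because the group-norm statistics never cross the batch axis, the result for one sample is unchanged when the batch is split up, which is what makes it usable with very small batches.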
2.2 Macro-architecture design
On the other hand, network macro-architecture design aims to obtain backbone network structures that improve performance across different tasks, for which the evaluation benchmark is the classification accuracy on ImageNet, CIFAR, etc. Famous macro-architectures include AlexNet, VGGNet , GoogleNet and its variants [37, 38], ResNet , and DenseNet . Being pioneering networks in the deep learning field, AlexNet, VGGNet and GoogleNet are still widely used by many researchers to design prototype networks for their applications. However, the trend in the research community is that ResNet and DenseNet have become the more favourable choices due to their competitive performance and simple designs. ResNet has two famous variants, WideResNet  and ResNeXt , which explore the dimensions of width and cardinality respectively. The original ResNet demonstrated that the identity shortcut helps stabilize the training of deep networks; however, performance decreases when the network becomes extremely deep. Pre-activation ResNet  solved this problem by re-ordering the activations in the ResNet module, but its gain over the original ResNet can only be observed in extremely deep networks . Later, it was found that such extreme depth is unnecessary, since it yields worse performance than increasing the width of the network with a similar number of parameters . ResNeXt further explored the influence of cardinality, which is more effective than increasing either depth or width. Despite some small differences between ResNet and its two variants, WideResNet and ResNeXt, they all use the identity shortcut. DenseNet, instead, effectively uses the concatenation technique without the identity shortcut. Moreover, CondenseNet , a variant of DenseNet, exploits the power of dense connection in a more extreme way.
One of its most significant differences from DenseNet is that layers with different feature-map resolutions are also densely connected. Furthermore, Dual Path Networks  and Mixed Link Networks  integrate ResNet and DenseNet into one network by using both identity shortcut and dense concatenation. Our work differs from previous works in that our approach adopts neither identity shortcut nor dense concatenation. FractalNet  also explored training ultra-deep networks relying on neither of the two; however, it provides less favourable performance. In the experiment section, we show that our proposed approach outperforms both ResNet and DenseNet.
3 Proposed approach
3.1 Background: Dense connection exists in ResNet and DenseNet
What is the difference between ResNet and DenseNet? As the names suggest, it seems that the difference lies in that ResNet reuses only one preceding feature-map, while DenseNet uses the features of all preceding convolutional blocks. In fact, a shared philosophy unifies ResNet and DenseNet: they both connect to the feature-maps of all preceding convolutional blocks. A similar finding has been revealed in [6, 41]. That is to say, dense connection exists in both ResNet and DenseNet [6, 41]. For a typical convolution in DNNs, we formulate it as:
$x_l = W_l \ast x_{l-1}$ (1)

where $x_l$ and $x_{l-1}$ indicate the current feature-map and the previous feature-map respectively; "$\ast$" indicates the convolution operation and $W_l$ indicates the convolution weight. For simplicity, we do not take the bias term of the convolution into account. In a VGG-style network , $x_{l-1}$ is only the preceding feature-map. In DenseNet, however, $x_{l-1}$ connects the feature-maps of all preceding convolutional blocks, as illustrated in Figure 1 (b):

$x_{l-1} = [\,y_{l-1}, y_{l-2}, \ldots, y_1, y_0\,]$ (2)

where $y_i$ represents each of the preceding block outputs and "$[\,\cdot\,]$" represents the concatenation operation. Replacing $x_{l-1}$ in Eq. 1 with the above concatenation, we get

$x_l = W_l \ast [\,y_{l-1}, y_{l-2}, \ldots, y_1, y_0\,]$ (3)

from which it is obvious that DenseNet uses the feature-maps of all preceding convolutional block outputs. In ResNet,

$y_l = f_l(y_{l-1}) + y_{l-1}$ (4)

where $f_l(\cdot)$ denotes the residual mapping of the $l$-th block, and it may seem that $y_l$ only reuses the preceding feature-map $y_{l-1}$. However, as shown in Figure 1 (a), we can recursively extend this function and get

$y_l = y_0 + \sum_{i=1}^{l} f_i(y_{i-1})$ (5)

Likewise, inserting the above $y_l$ into Eq. 1, for ResNet we get

$x_{l+1} = W_{l+1} \ast \big( y_0 + \sum_{i=1}^{l} f_i(y_{i-1}) \big)$ (6)

from which it is clear that ResNet also connects to the feature-maps of all preceding convolutional blocks [18, 6, 41]. The difference between ResNet and DenseNet is that ResNet adopts summation to combine all preceding feature-maps while DenseNet concatenates them.
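The recursive expansion in Eq. 5 can be checked numerically. The toy sketch below (scalar "feature-maps" and hypothetical residual functions, purely for illustration) runs a ResNet-style recurrence and confirms that the final feature equals the input plus the sum of all preceding residual outputs, i.e. a dense summed connection:

```python
# Toy 1-D "features": each residual block adds f_i(prev) to its input.
# The f_i are arbitrary scalar functions chosen for illustration only.
fs = [lambda v: 0.5 * v, lambda v: v + 1.0, lambda v: -0.2 * v]

def resnet_forward(y0, blocks):
    y = y0
    outs = []
    for f in blocks:
        r = f(y)          # residual branch f_l(y_{l-1})
        outs.append(r)
        y = y + r         # identity shortcut (Eq. 4)
    return y, outs

y0 = 2.0
y_l, residuals = resnet_forward(y0, fs)

# Eq. 5: the final feature equals the input plus the sum of ALL
# preceding residual outputs -- a dense (summed) connection.
assert abs(y_l - (y0 + sum(residuals))) < 1e-12
```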
3.2 Unified perspective with dense summation
As analyzed above, DenseNet differs from ResNet in the dense connection method it adopts: summation vs. concatenation. Here, we demonstrate that dense concatenation before a convolution is equivalent to dense summation after the convolution; thus Eq. 3 can be reformulated as:

$x_l = W_l \ast [\,y_{l-1}, \ldots, y_1, y_0\,] = \sum_{i=0}^{l-1} W_l^i \ast y_i$ (7)

where the single convolution weight $W_l$ is simply divided into multiple small convolution weights $W_l^i$, each having the same channel size as the corresponding $y_i$. Note that $W_l$ indicates the first convolution in the convolutional block instead of the whole convolutional block; thus, $x_l$ is the feature-map after the first convolution in the block. Overall, the above equivalence is illustrated in Figure 2, where $C_{in}$ and $C_{out}$ are the numbers of input and output channels respectively. A similar observation has been revealed in . For ResNet, we transform Eq. 6 into

$x_{l+1} = \sum_{i=0}^{l} W_{l+1} \ast r_i, \quad \text{with } r_0 = y_0 \text{ and } r_i = f_i(y_{i-1})$ (8)

Shifting the index of Eq. 8 for direct comparison, we summarize

DenseNet: $x_l = \sum_{i=0}^{l-1} W_l^i \ast y_i$, ResNet: $x_l = \sum_{i=0}^{l-1} W_l \ast r_i$ (9)

From Eq. 9 we observe that both ResNet and DenseNet contain a dense summation of convolved preceding features. This interesting observation provides a more unified perspective on ResNet and DenseNet in terms of their resemblance to each other. However, the main purpose of formulating this "unified perspective" is to better understand their core differences and thereby their pros and cons. Comparing the two formulas in Eq. 9, we find that the core difference lies in that the convolution weight $W_l$ in ResNet is the same for every preceding output, while $W_l^i$ in DenseNet is different for each $y_i$. This core difference results in other differences in practical use. The total input channel number of the concatenated features grows with the depth $l$ and is normally larger than that of ResNet when $l$ becomes large. Due to the concatenation, each $W_l^i$ is normally very small, but there are many more of them; thus, DenseNet in practice often requires more training time. Moreover, the large input channel number resulting from concatenation also requires more GPU memory. However, the merit of the DenseNet design is that it exhibits more flexibility in using previous feature-maps, because each $W_l^i$ is different.
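The concatenation/summation equivalence of Eq. 7 is easy to verify for a 1×1 convolution, which is just a matrix applied to the channel vector. In the pure-Python sketch below (toy numbers; two assumed preceding outputs with two channels each), splitting the weight matrix column-wise into per-input weights and summing the per-input products reproduces the concatenate-then-convolve result exactly:

```python
def matvec(W, v):
    # Matrix-vector product: a 1x1 convolution over the channel vector v.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# Two preceding feature "maps" with 2 channels each (toy values).
y1, y0 = [1.0, 2.0], [3.0, -1.0]
# One weight matrix: out_channels=2, in_channels=4 (= 2 + 2 concatenated).
W = [[0.5, -1.0, 2.0, 0.0],
     [1.5, 0.25, -0.5, 1.0]]

# Concatenate first, then convolve (left-hand side of Eq. 7).
concat_then_conv = matvec(W, y1 + y0)

# Split W column-wise into per-input weights W1, W0, convolve each input
# separately, then sum (right-hand side of Eq. 7).
W1 = [row[:2] for row in W]
W0 = [row[2:] for row in W]
split_then_sum = [a + b for a, b in zip(matvec(W1, y1), matvec(W0, y0))]

assert all(abs(a - b) < 1e-9 for a, b in zip(concat_then_conv, split_then_sum))
```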
It is worth mentioning that Eq. 9 does not reflect the practical implementations: on GPUs, $W_l \ast \sum_{i} r_i$ is much faster than $\sum_{i} W_l \ast r_i$, and $W_l \ast [\,y_{l-1}, \ldots, y_0\,]$ is much more efficient than $\sum_{i} W_l^i \ast y_i$. Our analysis above only demonstrates the theoretical connection between ResNet and DenseNet.
3.3 Dense shortcut and DSNet
With the above unified perspective, the core difference between ResNet and DenseNet is revealed as whether the convolution parameters are shared for each preceding output. Not sharing them results in the superior performance of DenseNet, with the disadvantage of requiring more GPU resources. The difference originates from the adoption of different dense connection techniques: identity shortcut and dense concatenation. In this paper we propose an alternative dense connection that is motivated to alleviate their drawbacks: it introduces flexibility in using the preceding feature-maps while still using the same $W_l$ for each of them. Benchmarking the ResNet formula in Eq. 9, we propose

$x_l = \sum_{i=0}^{l-1} W_l \ast \mathrm{DS}(r_i)$ (10)

which is equivalent to

$x_l = W_l \ast \sum_{i=0}^{l-1} \mathrm{DS}(r_i)$ (11)

where "$\mathrm{DS}$" indicates the dense shortcut. It refers to a dense weighted normalized shortcut consisting of a normalization and a channel-wise weight. Specifically, $\mathrm{DS}(x) = \lambda \cdot \mathrm{Norm}(x)$, where $\lambda$ represents the channel-wise weight and $\mathrm{Norm}(\cdot)$ indicates the normalization operation. In this work, the term "dense shortcut" is equivalent to "dense weighted normalized shortcut" unless otherwise specified. Eq. 11 is illustrated in Figure 3 (a); however, we conjecture that the aggregation output $s_i$, i.e. the aggregated feature that serves as the input to block $i{+}1$, contains more useful features than the corresponding single convolutional block output $r_i$. Thus, by replacing $r_i$ with $s_i$, we propose another variant:

$x_l = W_l \ast \sum_{i=0}^{l-1} \mathrm{DS}(s_i)$ (12)

which is illustrated in Figure 3 (b). The conjecture that the aggregation output is more meaningful than $r_i$ is supported by the experimental results (see Table 1); therefore, we mainly adopt variant (b) in Figure 3 in this study. We term the proposed network adopting the DS shortcut DSNet. DSNet adopts the same network backbone (the convolutional block itself and the block design) as ResNet . The ResNet backbone is tailored for the identity shortcut, not for our proposed shortcut; it is conceivable that redesigning the backbone might further improve the performance of our DSNet, but this is beyond the scope of this work. The only difference between our DSNet and ResNet  is that the identity shortcut is replaced with the proposed dense shortcut (i.e., the dense weighted normalized shortcut).
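As a concrete sketch of the aggregation in Eq. 11, the NumPy snippet below implements DS(x) = λ · Norm(x) and sums several preceding feature-maps through their own shortcuts. It is a minimal illustration, not the paper's implementation: for brevity, Norm is a single-group (layer-norm-style) normalization without affine parameters, and the λ values are arbitrary channel-wise weights that would be learnable in practice:

```python
import numpy as np

def ds(x, lam, eps=1e-5):
    """DS(x) = lam * Norm(x): a channel-wise weight times a per-sample
    normalization WITHOUT affine parameters (single group for brevity)."""
    mu = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return lam * (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
N, C, H, W = 2, 8, 4, 4
# Three preceding feature-maps, each with its own channel-wise weight
# of shape (1, C, 1, 1) so it broadcasts over batch and spatial axes.
feats = [rng.standard_normal((N, C, H, W)) for _ in range(3)]
lams = [np.full((1, C, 1, 1), 0.1 * (i + 1)) for i in range(3)]

# Sum all preceding features via their DS shortcuts; the single shared
# convolution W_l of Eq. 11 would then be applied to this sum.
aggregated = sum(ds(y, lam) for y, lam in zip(feats, lams))
```

Since every shortcut output has the same shape as the block input, the aggregated tensor can feed the unmodified ResNet backbone directly.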
To introduce dense shortcuts in ResNet, one naive approach is to densely connect all preceding feature-maps by replacing the single identity shortcut in Eq. 4 with a dense identity shortcut:

$y_l = f_l(y_{l-1}) + \sum_{i=0}^{l-1} y_i$ (13)

which can be recursively extended as:

$y_l = f_l(y_{l-1}) + \sum_{i=1}^{l-1} 2^{\,l-1-i} f_i(y_{i-1}) + 2^{\,l-1} y_0$ (14)

Comparing it with Eq. 5, we find that the dense identity shortcut is equivalent to adding extra constant multiples of the preceding features at the end of each convolutional block. This design, denoted ResNet50-dense in Table 1, does not achieve better performance than the original ResNet50, which demonstrates the failure of the naive dense identity shortcut. Next, we illustrate the motivation for the design of our DS shortcut.
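The constant blow-up can be verified by tracking how many times the first feature $y_0$ is counted inside each $y_l$ under the naive recurrence. The short sketch below uses symbolic counts instead of real feature-maps:

```python
# Track the coefficient that the first feature y_0 accumulates when EVERY
# preceding aggregation is added back in (the naive dense identity
# shortcut of Eq. 13), using symbolic counts instead of feature-maps.
def dense_identity_coeffs(num_blocks):
    # coeffs[l] = multiplier of y_0 inside y_l
    coeffs = [1]                      # y_0 contributes to itself once
    for _ in range(num_blocks):
        coeffs.append(sum(coeffs))    # y_l = f_l(...) + sum_{i<l} y_i
    return coeffs

print(dense_identity_coeffs(5))  # [1, 1, 2, 4, 8, 16]
```

The coefficients double at every block, matching the $2^{l-1}$ factor in Eq. 14: early features quickly dominate the summation, which is exactly what the normalization in the DS shortcut prevents.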
The motivation for using normalization is to bring all the preceding features to a similar scale, so that no single preceding feature dominates the whole summation, which facilitates training. Note that no affine transformation is applied in the normalization process. The weighted summation gives the network the freedom to assign a proper weight to each normalized feature-map depending on its significance. It is cumbersome to manually decide the weight for each one; thus, these weights are set as learnable parameters. Moreover, we empirically find that inserting a weighted normalized shortcut within the convolutional block, on the 3×3 convolution, also contributes to the performance improvement, and since it adds little computational burden, it is worth including. We term this variant DS2Net.
3.4 Ablation study
We adopt the widely used ResNet50 backbone for the ablation studies. The tests are conducted on CIFAR-100 and the results are reported in Table 1. Note that the width of the network is set to 0.25 times the width of the ResNet in  to save computation resources. We make two major observations from Table 1. First, normalization is important for improving performance. Specifically, group normalization (GN) outperforms batch normalization (BN) by a visible margin. Instance normalization (IN) and layer normalization (LN) are two special cases of GN; LN performs slightly worse than GN, and IN performs the worst. Second, the weight parameters are also critical for improving performance. In particular, we empirically find that adopting channel-wise weights is crucial for the performance gain, which indicates that different channels should have different weights. Overall, these two observations support the intuition behind the weighted normalized shortcut. Other observations include the superiority of DS2Net over DSNet and the inferiority of DSNet-a to DSNet.
|Architecture||Top-1 error (%)|
|ResNet50 (0.25)||26.13|
|DSNet50-a (GN + weight) (0.25)||25.23|
|DSNet50 (GN + weight) (0.25)||24.12|
|DS2Net50 (GN + weight) (0.25)||23.72|
|DSNet50 (LN + weight) (0.25)||24.43|
|DSNet50 (BN + weight) (0.25)||24.90|
|DSNet50 (IN + weight) (0.25)||28.11|
|DSNet50 (None + weight) (0.25)||26.26|
|DSNet50 (GN + no weight) (0.25)||25.85|
|ResNet50 (1)||21.43||4.82|
4 Experimental Results and Analysis
In this section, we conduct experiments across a range of datasets to evaluate the proposed dense shortcut.
4.1 CIFAR experiments
We first evaluate it on the CIFAR datasets. Both CIFAR-100 and CIFAR-10 contain 60,000 colour images with a resolution of 32×32. In total, 50,000 images are used as the training dataset and the remaining 10,000 images as the validation dataset. For data augmentation, we adopt the widely used cropping with 4-pixel padding and horizontal flipping. For preprocessing, the images are normalized with the channel means and standard deviations of the training dataset. For all CIFAR experiments, by default, we set the weight decay to 0.0005 and train for 64k iterations; the initial learning rate is set to 0.1 and is divided by 10 at 32k and 48k iterations, similar to . Since we adopt the same backbone as ResNet and replace the identity shortcut with our dense shortcut, we mainly compare our results with the original ResNets .
The results in Table 2 show that our proposed approach outperforms ResNet by a significant margin. On CIFAR-100, DSNet50 (0.25) outperforms ResNet50 (0.25) by a margin of 2%. DS2Net further improves on DSNet by a visible margin. The same trend is observed for both wider and deeper (depth 101) networks. CIFAR-10 mirrors this story; for simplicity, in the remainder of the paper we only report results on CIFAR-100. To further evaluate the robustness of the proposed DSNet to different depths and widths, we conduct extra experiments; the results are available in Table 3 and Table 4 respectively.
|Depth||Block config||ResNet||DSNet||DS2Net|
|26||[2, 2, 2, 2]||27.74||26.80||26.30|
|38||[3, 3, 3, 3]||27.26||25.59||25.30|
|50||[3, 4, 6, 3]||26.13||24.12||23.72|
|77||[3, 4, 15, 3]||25.32||23.95||23.55|
|101||[3, 4, 23, 3]||25.01||23.63||23.40|
For the depth exploration we fix the width, and for the width exploration we set the depth to 50 layers. We observe that DSNet consistently outperforms ResNet over a wide range of depths and widths. Since it has been shown in  that increasing the width of the network is more effective than increasing the depth for improving performance, in the following explorations we always set the depth to 50 layers. Furthermore, we evaluate the proposed dense shortcut on a famous variant of ResNet: ResNeXt. The results are available in Table 5 and show a trend similar to that of the original ResNet. Note that ResNeXt has a similar number of parameters to ResNet, while wide ResNet has almost three times more parameters and GFLOPs. Since wide ResNet essentially adopts the same structure as ResNet, the small difference being only the doubled Conv 3×3 feature-maps, we report only one case in Table 4 to avoid redundancy and do not perform more experiments on this structure. Even though ResNeXt is a well-optimized structure, our proposed approach further improves its performance by a significant margin. Interestingly, DS2NeXt (0.5) even outperforms ResNeXt (1) by a margin of 0.66%. This is quite surprising because ResNeXt (1) has around four times more parameters and GFLOPs than DS2NeXt (0.5); note that the number of parameters and GFLOPs increases linearly with depth but quadratically with width. We further compare our performance on CIFAR-100 with the performance reported in previous works. WRN-28-10 does not adopt the bottleneck structure, thus DS2WRN-28-10 is not applicable (the shortcut within the Conv block can only be inserted on the Conv 3×3 in the bottleneck for the dimensions to match). From Table 6 we observe that the networks that do not use dense connection perform much worse than those adopting it.
FractalNet, adopting neither identity shortcut nor dense concatenation but with a careful engineering design, performs much better than the other networks without dense connection; however, its performance is still not comparable to the ResNet variants or DenseNets. Similar to FractalNet, our proposed DSNet adopts neither identity shortcut nor dense concatenation, yet it outperforms both the ResNets (including WRN and ResNeXt) and the DenseNets. The results show that the dense weighted normalized shortcut constitutes a competitive dense connection technique.
|Architecture||Params||Top-1 error (%)|
|Deeply-supervised Net ||-||34.57|
|Highway Network ||-||32.39|
|with dropout ||38.6M||23.73|
|ResNet with stochastic depth ||1.7M||24.58|
|Pre-activation ResNet ||10.2M||22.71|
|DenseNet (k = 24) ||27.2M||19.25|
|DenseNet-BC (k = 24) ||15.3M||17.60|
|DenseNet-BC (k = 40) ||25.6M||17.18|
|with dropout ||36.5M||18.85|
|ResNeXt-29, 8×64d ||34.4M||17.77|
|ResNeXt-29, 16×64d ||68.1M||17.31|
4.2 ImageNet experiments
ImageNet is the benchmark dataset for classification tasks to evaluate and compare different approaches. Our implementation details follow ResNet . Specifically, we adopt the commonly used random cropping with scale and aspect-ratio augmentation for training and adopt SGD as the optimizer. For typical training on ImageNet, 8 GPUs are used and the batch size is set to 256 . We used 4 GPUs to train the proposed DSNet with a batch size of 128 (32 per GPU). Accordingly, taking the linear scaling rule into account, we set the initial learning rate to 0.05 instead of the commonly used 0.1. We train the network for 100 epochs, dividing the learning rate by 10 every 30 epochs. We report the single-crop classification errors on the validation dataset. Both top-1 and top-5 errors are reported, and the results are available in Table 7.
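The learning-rate choice follows directly from the linear scaling rule applied to our halved batch size:

```latex
\eta = \eta_{\text{base}} \times \frac{B}{B_{\text{base}}} = 0.1 \times \frac{128}{256} = 0.05
```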
|Architecture||Params||Top-1 (%)||Top-5 (%)|
|ResNeXt-50, 32×4d ||25M||22.2||-|
We note that the proposed DS2Net50 achieves significantly better performance than the original ResNet50. Somewhat surprisingly, DS2Net50 outperforms SE-Net (which won first place in the ILSVRC 2017 challenge) as well as CBAM with its improved attention module by a relatively large margin. DS2Net50 achieves equivalent (if not better) performance to that of the much deeper ResNet152. Compared with WRN-50-2-bottleneck, DS2Net achieves a slightly worse top-1 error and a slightly better top-5 error; note that WRN-50-2 has almost three times more parameters and GFLOPs than DS2Net. DS2Net-50 achieves slightly better performance than ResNeXt-50, 32×4d and Res-RGSNet50. It is claimed by  that DenseNet201 can achieve performance equivalent to ResNet101, which has twice the parameters and GFLOPs. Even though DS2Net outperforms DenseNet264 with fewer parameters and GFLOPs, we do not argue that this is the main merit of DS2Net over DenseNet. Instead, the main advantage of DS2Net over DenseNet is that it is more time-efficient in practice with GPU implementations. In short, DSNet shows equivalent or better performance than DenseNet while avoiding its drawbacks. Our proposed DS2Net-50 also achieves performance comparable to the recent Res2Net-50 . By applying the dense shortcuts to Res2Net, we further improve its performance by a margin of 0.40%, which is non-trivial considering that Res2Net is already a very well-designed architecture. The number of added weight parameters is less than 0.15% of the original number of parameters, and only a small computation overhead is added, because DS2Net adopts the same backbone structure as the original ResNet and only light operations are needed for the added shortcuts.
The training curve is shown in Figure 4. We observe that the proposed dense weighted normalized shortcut is also beneficial to speed up the convergence. It is also important to note that the training error of DS2Net is much smaller.
4.3 COCO detection dataset experiments
To evaluate the generalization capability of the proposed DSNet across datasets, we further evaluate it on the MS COCO 2014 detection dataset . Faster R-CNN  is chosen as the detection method. The network is pre-trained on ImageNet and then finetuned on COCO for 5 epochs for fast performance validation. The results are available in Table 8. On the COCO detection dataset, our proposed DSNet achieves better performance than ResNet50.
|Backbone||mAP.5||mAP.75||mAP [.5, .95]|
4.4 Visualization with Grad-CAM
We apply the widely used Grad-CAM  to ResNet50  and our proposed DSNet/DS2Net on images from the ImageNet validation set. The visualization results are available in Figure 5. Grad-CAM calculates the gradients with respect to a certain class; thus, the Grad-CAM result shows the attended regions in the image. We observe that DSNet attends more to the objects and has a sharper focus (i.e. attention) than ResNet50. No obvious difference is observed between DSNet50 and DS2Net50. More qualitative results are available in the supplementary material.
4.5 Implementation design and memory/speed test
The straightforward implementation of the dense normalized shortcut is to perform normalization with an affine transformation in every shortcut. This causes unnecessary computation and slightly more parameter overhead. First, the normalization process can be shared for a given preceding feature. Second, the affine transformation by default has both a scale and a bias, while the summation of dense biases is mathematically redundant. In our implementation, for every aggregation output, we perform the normalization only once, and it is then shared by all the dense shortcuts linked to it. Thus, for each dense shortcut path, the only operation is to multiply the shared normalized feature by the corresponding weight, which is very light. This implementation choice does not influence the performance but requires less computation time.
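A minimal sketch of this sharing (NumPy, layer-norm-style normalization without affine parameters for brevity; illustrative only): normalizing once per aggregation output and letting each downstream shortcut apply only its own channel-wise weight produces exactly the same tensors as re-normalizing inside every shortcut:

```python
import numpy as np

def norm(x, eps=1e-5):
    # Normalization WITHOUT affine parameters (single group for brevity).
    mu = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 4, 4))  # one aggregation output
# Three downstream shortcuts, each with its own channel-wise weight.
lams = [np.full((1, 8, 1, 1), 0.1 * (i + 1)) for i in range(3)]

# Naive: every shortcut re-normalizes the same feature.
naive = [lam * norm(x) for lam in lams]

# Shared: normalize once and reuse; each shortcut is one multiplication.
shared_norm = norm(x)
shared = [lam * shared_norm for lam in lams]
```

Only the cheap channel-wise multiplication remains on each shortcut path; the normalization cost is paid once per aggregation output regardless of how many shortcuts consume it.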
|Architecture||Top-1 (%)||memory (MB)||time (s)|
With the above implementation, we measure the ImageNet training memory and speed on the same machine (equipped with four 1080Ti GPUs) with the hyperparameters specified above. We report the consumed memory per GPU and the computation time per iteration in Table 9. DS2Net (with fewer parameters, as shown in Table 7) performs better than ResNet152 and DenseNet264, with an advantage in both memory and speed. Compared with ResNet50, the increase in memory and computation for DSNet/DS2Net is marginal.
5 Conclusion
We provide a unified perspective of dense summation to facilitate the understanding of the core difference between ResNet and DenseNet, demonstrating that the core difference lies in whether the convolution parameters are shared for the preceding feature-maps. Based on this perspective, we proposed the dense weighted normalized shortcut as an alternative dense connection method, which outperforms the two existing dense connection techniques: the identity shortcut in ResNet and the dense concatenation in DenseNet. We found that dense summation from the aggregation outputs provides superior performance to that from the convolutional block outputs. In short, the dense shortcut addresses the decrease of representational capacity in ResNet while avoiding the drawback of requiring more GPU resources in DenseNet. The proposed DSNet has been evaluated on multiple benchmark datasets and shows superior performance to its counterpart ResNet. For example, on ImageNet, DS2Net50 achieves better performance than the much deeper ResNet152; it also achieves performance comparable to DenseNet with fewer parameters and less computation. Moreover, it achieves performance comparable to the recent Res2Net, which can be further boosted by our dense shortcuts. Compared with other "free" performance-boost modules such as SE and CBAM, our dense shortcut also achieves superior performance. The Grad-CAM results show that DSNet, in general, focuses better on the object in the image than its counterpart ResNet.
-  (2016) Layer normalization. arXiv preprint arXiv:1607.06450. Cited by: §2.1.
-  (2020) Targeted attack for deep hashing based retrieval. In Proceedings of the European Conference on Computer Vision (ECCV), Cited by: §1.
-  (2020) Double targeted universal adversarial perturbations. arXiv preprint arXiv:2010.03288. Cited by: §1.
-  (2020) Data from model: extracting data from non-robust and robust models. arXiv preprint arXiv:2007.06196. Cited by: §1.
-  (2020) Batch normalization increases adversarial vulnerability: disentangling usefulness and robustness of model features. arXiv preprint arXiv:2010.03316. Cited by: §2.1.
-  (2017) Dual path networks. In Advances in Neural Information Processing Systems, pp. 4467–4475. Cited by: §2.2, §3.1.
-  (2005) Histograms of oriented gradients for human detection. In CVPR, Cited by: §1.
-  (2009) ImageNet: a large-scale hierarchical image database. In CVPR, Cited by: §1.
-  (2020) Adversarial attack on deep product quantization network for image retrieval. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §1.
-  (2019) Res2net: a new multi-scale backbone architecture. IEEE transactions on pattern analysis and machine intelligence. Cited by: §4.2, Table 7.
-  (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, pp. 580–587. Cited by: §1.
-  (2015) Fast R-CNN. In ICCV, Cited by: §1.
-  (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), Cited by: §1.
-  (2016) Deep residual learning for image recognition. In CVPR, Cited by: §1, §1, §2.1, §2.2, §3.3, §3.4, Table 1, Table 2, §4.1, §4.2, §4.4, Table 6, Table 7, Table 9.
-  (2016) Identity mappings in deep residual networks. In ECCV, Cited by: §2.2, Table 6.
-  (2018) Squeeze-and-excitation networks. In CVPR, Cited by: §2.1, §4.2, Table 7.
-  (2018) Condensenet: an efficient densenet using learned group convolutions. In CVPR, pp. 2752–2761. Cited by: §2.2.
-  (2017) Densely connected convolutional networks. In CVPR, Cited by: §1, §1, §2.2, §3.1, §4.2, Table 6, Table 7, Table 9.
-  (2016) Deep networks with stochastic depth. In European conference on computer vision, pp. 646–661. Cited by: Table 6.
-  (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §2.1.
-  (2012) ImageNet classification with deep convolutional neural networks. In NIPS, Cited by: §1, §2.2.
-  (2016) Fractalnet: ultra-deep neural networks without residuals. arXiv preprint arXiv:1605.07648. Cited by: §1, §2.2, Table 6.
-  (2015) Deeply-supervised nets. In Artificial intelligence and statistics, pp. 562–570. Cited by: Table 6.
-  (2014) Microsoft COCO: common objects in context. In ECCV, Cited by: §1, §4.3.
-  (2015) Fully convolutional networks for semantic segmentation. In CVPR, Cited by: §1.
-  (2004) Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60 (2), pp. 91–110. Cited by: §1.
-  (2015) Understanding deep image representations by inverting them. In CVPR, pp. 5188–5196. Cited by: §1.
-  (2017) Feature visualization. Distill 2 (11), pp. e7. Cited by: §1.
-  (2018) Gradients explode-deep networks are shallow-resnet explained. In ICLR Workshop, Cited by: §1.
-  (2017) Memory-efficient implementation of densenets. arXiv preprint arXiv:1707.06990. Cited by: §1.
-  (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In NIPS, Cited by: §1, §4.3.
-  (2016) Weight normalization: a simple reparameterization to accelerate training of deep neural networks. In NIPS, Cited by: §2.1.
-  (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626. Cited by: Figure 5, §4.4.
-  (2015) Very deep convolutional networks for large-scale image recognition. In ICLR, Cited by: §1, §2.2, §3.1.
-  (2014) Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806. Cited by: Table 6.
-  (2015) Highway networks. arXiv preprint arXiv:1505.00387. Cited by: Table 6.
-  (2015) Going deeper with convolutions. In CVPR, Cited by: §1, §2.2.
-  (2016) Rethinking the inception architecture for computer vision. In CVPR, Cited by: §1, §2.2.
-  (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §1.
-  (2016) Instance normalization: the missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022. Cited by: §2.1.
-  (2018) Mixed link networks. arXiv preprint arXiv:1802.01808. Cited by: §2.2, §3.1.
-  (2018) CBAM: convolutional block attention module. In ECCV, Cited by: Table 7.
-  (2018) Group normalization. In ECCV, Cited by: §2.1.
-  (2017) Aggregated residual transformations for deep neural networks. In CVPR, Cited by: §2.2, §3.2, Table 6.
-  (2016) Wide residual networks. In BMVC, Cited by: §1, §1, §2.2, §4.1, Table 6, Table 7.
-  (2014) Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818–833. Cited by: §1.
-  (2020) CD-uap: class discriminative universal adversarial perturbation. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §1.
-  (2020) Understanding adversarial examples from the mutual influence of images and perturbations. In Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §1.
-  (2020) DeepPTZ: deep self-calibration for ptz cameras. In Winter Conference on Applications of Computer Vision (WACV), Cited by: §1.
-  (2019) Revisiting residual networks with nonlinear shortcuts. In British Machine Vision Conference (BMVC), Cited by: §1, §1, §1, §2.1, Table 7.
-  (2017) Interleaved group convolutions. In ICCV, pp. 4373–4382. Cited by: §2.1.