Wide Residual Networks
Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at https://github.com/szagoruyko/wide-residual-networks
Convolutional neural networks have seen a gradual increase of the number of layers in the last few years, starting from AlexNet [Krizhevsky et al.(2012a)Krizhevsky, Sutskever, and Hinton], VGG [Simonyan and Zisserman(2015)], Inception [Szegedy et al.(2015)Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, and Rabinovich] to Residual [He et al.(2015a)He, Zhang, Ren, and Sun] networks, corresponding to improvements in many image recognition tasks. The superiority of deep networks has been noted in several works in recent years [Bianchini and Scarselli(2014), Montúfar et al.(2014)Montúfar, Pascanu, Cho, and Bengio]. However, training deep neural networks has several difficulties, including exploding/vanishing gradients and degradation. Various techniques were suggested to enable training of deeper neural networks, such as well-designed initialization strategies [Bengio and Glorot(2010), He et al.(2015b)He, Zhang, Ren, and Sun], better optimizers [Sutskever et al.(2013)Sutskever, Martens, Dahl, and Hinton], skip connections [Lee et al.(2014)Lee, Xie, Gallagher, Zhang, and Tu, Raiko et al.(2012)Raiko, Valpola, and Lecun], knowledge transfer [Romero et al.(2014)Romero, Ballas, Ebrahimi Kahou, Chassang, Gatta, and Bengio, Chen et al.(2016)Chen, Goodfellow, and Shlens] and layer-wise training [Schmidhuber(1992)].
The latest residual networks [He et al.(2015a)He, Zhang, Ren, and Sun] had great success, winning the ImageNet and COCO 2015 competitions and achieving state-of-the-art results in several benchmarks, including object classification on ImageNet and CIFAR, and object detection and segmentation on PASCAL VOC and MS COCO. Compared to Inception architectures they show better generalization, meaning the features can be utilized in transfer learning with better efficiency. Also, follow-up work showed that residual links speed up convergence of deep networks [Szegedy et al.(2016)Szegedy, Ioffe, and Vanhoucke]. Recent follow-up work explored the order of activations in residual networks, presenting identity mappings in residual blocks [He et al.(2016)He, Zhang, Ren, and Sun] and improving training of very deep networks. Successful training of very deep networks was also shown to be possible through the use of highway networks [Srivastava et al.(2015)Srivastava, Greff, and Schmidhuber], an architecture that had been proposed prior to residual networks. The essential difference between residual and highway networks is that in the latter the residual links are gated and the weights of these gates are learned.

Therefore, up to this point, the study of residual networks has focused mainly on the order of activations inside a ResNet block and on the depth of residual networks. In this work we attempt to conduct an experimental study that goes beyond the above points. By doing so, our goal is to explore a much richer set of network architectures of ResNet blocks and thoroughly examine how several other aspects besides the order of activations affect performance. As we explain below, such an exploration of architectures has led to new interesting findings with great practical importance concerning residual networks.
Width vs depth in residual networks. The problem of shallow vs deep networks has been discussed for a long time in machine learning [Larochelle et al.(2007)Larochelle, Erhan, Courville, Bergstra, and Bengio, Bengio and LeCun(2007)], with pointers to the circuit complexity theory literature showing that shallow circuits can require exponentially more components than deeper circuits. The authors of residual networks tried to make them as thin as possible in favor of increasing their depth and having fewer parameters, and even introduced a "bottleneck" block which makes ResNet blocks even thinner.

Figure 1: Various residual blocks used in the paper. Batch normalization and ReLU precede each convolution (omitted for clarity).
We note, however, that the residual block with identity mapping that allows training very deep networks is at the same time a weakness of residual networks. As gradient flows through the network there is nothing to force it to go through the residual block weights, and it can avoid learning anything during training, so it is possible that there are either only a few blocks that learn useful representations, or many blocks that share very little information and contribute little to the final goal. This problem was formulated as diminishing feature reuse in [Srivastava et al.(2015)Srivastava, Greff, and Schmidhuber]. The authors of [Huang et al.(2016)Huang, Sun, Liu, Sedra, and Weinberger] tried to address this problem with the idea of randomly disabling residual blocks during training. This method can be viewed as a special case of dropout [Srivastava et al.(2014)Srivastava, Hinton, Krizhevsky, Sutskever, and Salakhutdinov], where each residual block has an identity scalar weight on which dropout is applied. The effectiveness of this approach supports the hypothesis above.
Motivated by the above observation, our work builds on top of [He et al.(2016)He, Zhang, Ren, and Sun] and tries to answer the question of how wide deep residual networks should be and to address the problem of training. In this context, we show that widening of ResNet blocks (if done properly) provides a much more effective way of improving the performance of residual networks compared to increasing their depth. In particular, we present wider deep residual networks that significantly improve over [He et al.(2016)He, Zhang, Ren, and Sun], having 50 times fewer layers and being more than 2 times faster. We call the resulting network architectures wide residual networks. For instance, our wide 16-layer deep network has the same accuracy as a 1000-layer thin deep network and a comparable number of parameters, while being several times faster to train. These experiments thus seem to indicate that the main power of deep residual networks is in the residual blocks, and that the effect of depth is supplementary. We note that one can train even better wide residual networks that have twice as many parameters (and more), which suggests that to further improve performance by increasing the depth of thin networks one would need to add thousands of layers.
Use of dropout in ResNet blocks. Dropout was first introduced in [Srivastava et al.(2014)Srivastava, Hinton, Krizhevsky, Sutskever, and Salakhutdinov] and was then adopted by many successful architectures such as [Krizhevsky et al.(2012a)Krizhevsky, Sutskever, and Hinton, Simonyan and Zisserman(2015)]. It was mostly applied on top layers that had a large number of parameters to prevent feature co-adaptation and overfitting. It was then largely superseded by batch normalization [Ioffe and Szegedy(2015)], which was introduced as a technique to reduce internal covariate shift in neural network activations by normalizing them to have a specific distribution. It also works as a regularizer, and the authors experimentally showed that a network with batch normalization achieves better accuracy than a network with dropout. In our case, as widening of residual blocks results in an increase of the number of parameters, we studied the effect of dropout to regularize training and prevent overfitting. Previously, dropout in residual networks was studied in [He et al.(2016)He, Zhang, Ren, and Sun] with dropout being inserted in the identity part of the block, and the authors showed negative effects of that. Instead, we argue here that dropout should be inserted between convolutional layers. Experimental results on wide residual networks show that this leads to consistent gains, yielding even new state-of-the-art results (e.g., a 16-layer-deep wide residual network with dropout achieves 1.64% error on SVHN).
In summary, the contributions of this work are as follows:
We present a detailed experimental study of residual network architectures that thoroughly examines several important aspects of ResNet block structure.
We propose a novel widened architecture for ResNet blocks that allows for residual networks with significantly improved performance.
We propose a new way of utilizing dropout within deep residual networks so as to properly regularize them and prevent overfitting during training.
Last, we show that our proposed ResNet architectures achieve state-of-the-art results on several datasets, dramatically improving the accuracy and speed of residual networks.
A residual block with identity mapping can be represented by the following formula:

$$x_{l+1} = x_l + \mathcal{F}(x_l, \mathcal{W}_l) \qquad (1)$$

where $x_l$ and $x_{l+1}$ are the input and output of the $l$-th unit in the network, $\mathcal{F}$ is a residual function and $\mathcal{W}_l$ are the parameters of the block. A residual network consists of sequentially stacked residual blocks.
In [He et al.(2016)He, Zhang, Ren, and Sun] residual networks consisted of two types of blocks: a "basic" block with two consecutive 3×3 convolutions, and a "bottleneck" block in which a single 3×3 convolution is surrounded by dimensionality-reducing and expanding 1×1 convolutions.
Compared to the original architecture [He et al.(2015a)He, Zhang, Ren, and Sun], in [He et al.(2016)He, Zhang, Ren, and Sun] the order of batch normalization, activation and convolution in the residual block was changed from conv-BN-ReLU to BN-ReLU-conv. As the latter was shown to train faster and achieve better results, we do not consider the original version. Furthermore, so-called "bottleneck" blocks were initially used to make blocks less computationally expensive so that the number of layers could be increased. As we want to study the effect of widening, and "bottleneck" is used to make networks thinner, we do not consider it either, focusing instead on the "basic" residual architecture.
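For concreteness, below is a minimal PyTorch sketch of a pre-activation "basic" block with the BN-ReLU-conv order described above. This is an illustrative re-implementation under our own naming (the class PreActBasicBlock is not from the paper, whose code is in Torch), not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreActBasicBlock(nn.Module):
    """Pre-activation "basic" residual block: BN-ReLU-conv3x3, applied twice."""
    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3,
                               stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
                               stride=1, padding=1, bias=False)
        # 1x1 projection on the shortcut when the shape changes, identity otherwise
        self.shortcut = None
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Conv2d(in_planes, planes, kernel_size=1,
                                      stride=stride, bias=False)

    def forward(self, x):
        out = F.relu(self.bn1(x))
        residual = self.shortcut(out) if self.shortcut is not None else x
        out = self.conv1(out)
        out = self.conv2(F.relu(self.bn2(out)))
        return out + residual  # x_{l+1} = x_l + F(x_l, W_l), eq. (1)
```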
There are essentially three simple ways to increase representational power of residual blocks:
to add more convolutional layers per block
to widen the convolutional layers by adding more feature planes
to increase filter sizes in convolutional layers
As small filters were shown to be very effective in several works including [Simonyan and Zisserman(2015), Szegedy et al.(2016)Szegedy, Ioffe, and Vanhoucke], we do not consider using filters larger than 3×3. Let us also introduce two factors: a deepening factor l and a widening factor k, where l is the number of convolutions in a block and k multiplies the number of features in the convolutional layers; thus, the baseline "basic" block corresponds to l = 2, k = 1. Figures 1(a) and 1(c) show schematic examples of "basic" and "basic-wide" blocks respectively.
Table 1: Structure of wide residual networks. Network width is determined by the factor k.

group name | output size | block type = B(3,3)
conv1 | 32×32 | [3×3, 16]
conv2 | 32×32 | [3×3, 16×k; 3×3, 16×k] × N
conv3 | 16×16 | [3×3, 32×k; 3×3, 32×k] × N
conv4 | 8×8 | [3×3, 64×k; 3×3, 64×k] × N
avg-pool | 1×1 | [8×8]
The general structure of our residual networks is illustrated in table 1: it consists of an initial convolutional layer conv1 that is followed by 3 groups (each of size N) of residual blocks conv2, conv3 and conv4, followed by average pooling and a final classification layer. The size of conv1 is fixed in all of our experiments, while the introduced widening factor k scales the width of the residual blocks in the three groups conv2-4 (e.g., the original "basic" architecture is equivalent to k = 1). We want to study the effect of the representational power of the residual block and, to that end, we perform and test several modifications of the "basic" architecture, which are detailed in the following subsections.
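The structure from table 1 can be sketched as follows, assuming the PreActBasicBlock from the earlier sketch. The WideResNet class name and the depth relation depth = 6N + 4 follow common re-implementations of the paper's Torch code rather than the original implementation itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WideResNet(nn.Module):
    """Illustrative WRN-n-k per table 1: conv1, three groups of pre-activation
    basic blocks of widths 16k/32k/64k, average pooling, linear classifier.
    Assumes the PreActBasicBlock class from the earlier sketch."""
    def __init__(self, depth, k, num_classes=10):
        super().__init__()
        assert (depth - 4) % 6 == 0, "common convention: depth = 6*N + 4"
        N = (depth - 4) // 6
        widths = [16, 16 * k, 32 * k, 64 * k]
        self.conv1 = nn.Conv2d(3, widths[0], 3, padding=1, bias=False)
        self.group2 = self._make_group(widths[0], widths[1], N, stride=1)
        self.group3 = self._make_group(widths[1], widths[2], N, stride=2)
        self.group4 = self._make_group(widths[2], widths[3], N, stride=2)
        self.bn = nn.BatchNorm2d(widths[3])
        self.fc = nn.Linear(widths[3], num_classes)

    def _make_group(self, in_planes, planes, N, stride):
        blocks = [PreActBasicBlock(in_planes, planes, stride)]
        blocks += [PreActBasicBlock(planes, planes, 1) for _ in range(N - 1)]
        return nn.Sequential(*blocks)

    def forward(self, x):
        out = self.conv1(x)
        out = self.group4(self.group3(self.group2(out)))
        out = F.relu(self.bn(out))
        out = F.adaptive_avg_pool2d(out, 1).flatten(1)
        return self.fc(out)

# Example: model = WideResNet(depth=28, k=10, num_classes=10)  # WRN-28-10
```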
Let B(M) denote the residual block structure, where M is a list with the kernel sizes of the convolutional layers in a block. For example, B(3,1) denotes a residual block with 3×3 and 1×1 convolutional layers (we always assume square spatial kernels). Note that, as we do not consider "bottleneck" blocks as explained earlier, the number of feature planes is always kept the same across the block. We would like to answer the question of how important each of the 3×3 convolutional layers of the "basic" residual architecture is and whether they can be substituted by a less computationally expensive 1×1 layer or even a combination of 1×1 and 3×3 convolutional layers, e.g., B(1,3) or B(1,3,1). This can increase or decrease the representational power of the block. We thus experiment with the following combinations (note that the last combination, i.e., B(3,1,1), is similar to the effective Network-in-Network [Lin et al.(2013)Lin, Chen, and Yan] architecture); a minimal sketch of such a parameterized block follows the list:

B(3,3): original "basic" block
B(3,1,3): with one extra 1×1 layer
B(1,3,1): with the same dimensionality of all convolutions, "straightened" bottleneck
B(1,3): the network has alternating 1×1 - 3×3 convolutions everywhere
B(3,1): similar idea to the previous block
B(3,1,1): Network-in-Network style block
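To make the B(M) notation concrete, here is a hypothetical sketch of the convolutional body of a block parameterized by a list of kernel sizes; the make_block helper and its exact structure are our own illustration, not the paper's implementation.

```python
import torch.nn as nn

def make_block(planes, kernel_sizes):
    """Build the convolutional body of a B(M) block, e.g. kernel_sizes=(3, 1, 3).
    Each convolution is preceded by BN and ReLU (pre-activation), and the
    number of feature planes is kept constant across the block."""
    layers = []
    for ks in kernel_sizes:
        layers += [nn.BatchNorm2d(planes),
                   nn.ReLU(inplace=True),
                   nn.Conv2d(planes, planes, kernel_size=ks,
                             padding=ks // 2, bias=False)]
    return nn.Sequential(*layers)

# B(3,3): make_block(32, (3, 3));  B(1,3,1): make_block(32, (1, 3, 1))
```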
We also experiment with the block deepening factor l to see how it affects performance. The comparison has to be done among networks with the same number of parameters, so in this case we need to build networks with different l and d (where d denotes the total number of blocks) while ensuring that network complexity is kept roughly constant. This means, for instance, that d should decrease whenever l increases.
In addition to the above modifications, we experiment with the widening factor k of a block. While the number of parameters increases linearly with l (the deepening factor) and d (the number of ResNet blocks), the number of parameters and computational complexity are quadratic in k. However, it is more computationally effective to widen the layers than to have thousands of small kernels, as a GPU is much more efficient in parallel computations on large tensors, so we are interested in an optimal d to k ratio.

One argument for wider residual networks would be that almost all architectures before residual networks, including the most successful Inception [Szegedy et al.(2015)Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, and Rabinovich] and VGG [Simonyan and Zisserman(2015)], were much wider compared to [He et al.(2016)He, Zhang, Ren, and Sun]. For example, the residual networks WRN-22-8 and WRN-16-10 (see next paragraph for an explanation of this notation) are very similar in width, depth and number of parameters to VGG architectures.
We further refer to original residual networks with k = 1 as "thin" and to networks with k > 1 as "wide". In the rest of the paper we use the following notation: WRN-n-k denotes a residual network that has a total of n convolutional layers and a widening factor k (for example, a network with 40 layers that is k = 2 times wider than the original would be denoted as WRN-40-2). Also, when applicable we append the block type, e.g., WRN-40-2-B(3,3).
As widening increases the number of parameters, we would like to study ways of regularization. Residual networks already have batch normalization, which provides a regularization effect; however, it requires heavy data augmentation, which we would like to avoid, and this is not always possible. We add a dropout layer into each residual block between the convolutions, as shown in fig. 1(d), after the ReLU, to perturb batch normalization in the next residual block and prevent it from overfitting. In very deep residual networks this should help deal with the diminishing feature reuse problem by enforcing learning in different residual blocks.
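A minimal sketch of where the dropout layer sits (after the second ReLU, between the two convolutions), again assuming the pre-activation block shown earlier; the class name and the default drop probability are illustrative.

```python
import torch.nn as nn
import torch.nn.functional as F

class WideDropoutBlock(nn.Module):
    """Pre-activation basic block with dropout inserted between the two convolutions."""
    def __init__(self, in_planes, planes, stride=1, dropout_p=0.3):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_planes)
        self.conv1 = nn.Conv2d(in_planes, planes, 3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.dropout = nn.Dropout(p=dropout_p)
        self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
        self.shortcut = None
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Conv2d(in_planes, planes, 1, stride=stride,
                                      bias=False)

    def forward(self, x):
        out = F.relu(self.bn1(x))
        residual = self.shortcut(out) if self.shortcut is not None else x
        out = self.conv1(out)
        out = self.dropout(F.relu(self.bn2(out)))  # dropout after ReLU, before conv2
        return self.conv2(out) + residual
```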
For experiments we chose the well-known CIFAR-10, CIFAR-100, SVHN and ImageNet image classification datasets. The CIFAR-10 and CIFAR-100 datasets [Krizhevsky et al.(2012b)Krizhevsky, Nair, and Hinton] consist of 32×32 color images drawn from 10 and 100 classes, split into 50,000 training and 10,000 test images. For data augmentation we do horizontal flips and take random crops from the image padded by 4 pixels on each side, filling missing pixels with reflections of the original image. We do not use heavy data augmentation as proposed in [Graham(2014)]. SVHN is a dataset of Google's Street View House Numbers images and contains about 600,000 digit images, coming from a significantly harder real-world problem. For experiments on SVHN we do not do any image preprocessing, except dividing images by 255 to provide them in the [0,1] range as input. All of our experiments except ImageNet are based on the [He et al.(2016)He, Zhang, Ren, and Sun] architecture with pre-activation residual blocks, which we use as the baseline. For ImageNet, we find that using pre-activation in networks with fewer than 100 layers does not make any significant difference, so we decided to use the original ResNet architecture in this case. Unless mentioned otherwise, for CIFAR we follow the image preprocessing of [Goodfellow et al.(2013)Goodfellow, Warde-Farley, Mirza, Courville, and Bengio] with ZCA whitening. However, for some CIFAR experiments we instead use simple mean/std normalization so that we can directly compare with [He et al.(2016)He, Zhang, Ren, and Sun] and other ResNet-related works that use this type of preprocessing.

In the following we describe our findings w.r.t. the different ResNet block architectures and also analyze the performance of our proposed wide residual networks. We note that for all experiments related to the type of convolutions in a block and the number of convolutions per block we use k = 2 and reduced depth compared to [He et al.(2016)He, Zhang, Ren, and Sun] in order to speed up training.
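A sketch of the CIFAR augmentation described above (random horizontal flip and a random 32×32 crop from an image padded by 4 pixels with reflection), using torchvision. The mean/std values shown are the commonly used CIFAR-10 statistics and are an assumption, not quoted from the paper; ZCA whitening would replace this normalization step.

```python
import torchvision.transforms as T

# Commonly used CIFAR-10 statistics (assumed, not taken from the paper)
CIFAR10_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR10_STD = (0.2470, 0.2435, 0.2616)

train_transform = T.Compose([
    T.RandomCrop(32, padding=4, padding_mode='reflect'),  # pad 4 px, reflect fill
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize(CIFAR10_MEAN, CIFAR10_STD),  # simple mean/std normalization
])

# SVHN: no augmentation, only scaling images to [0, 1] (ToTensor already does this)
svhn_transform = T.ToTensor()
```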
Table 2: Test error (%, median over 5 runs) on CIFAR-10 of residual networks with k = 2 and different block types. Time column measures one training epoch.

block type | depth | # params | time, s | CIFAR-10
B(1,3,1) | 40 | 1.4M | 85.8 | 6.06
B(3,1) | 40 | 1.2M | 67.5 | 5.78
B(1,3) | 40 | 1.3M | 72.2 | 6.42
B(3,1,1) | 40 | 1.3M | 82.2 | 5.86
B(3,3) | 28 | 1.5M | 67.5 | 5.73
B(3,1,3) | 22 | 1.1M | 59.9 | 5.78

Table 3: Test error (%, median over 5 runs) on CIFAR-10 of WRN-40-2 (2.2M parameters) with different numbers of convolutions l per block.

l | CIFAR-10
1 | 6.69
2 | 5.43
3 | 5.65
4 | 5.93
We start by reporting results using trained networks with different block types (reported results are on CIFAR-10). We used WRN-40-2 for blocks B(1,3,1), B(3,1), B(1,3) and B(3,1,1), as these blocks have only a single 3×3 convolution. To keep the number of parameters comparable we trained the other networks with fewer layers: WRN-28-2-B(3,3) and WRN-22-2-B(3,1,3). We provide the results, including test accuracy as the median over 5 runs and time per training epoch, in table 2. Block B(3,3) turned out to be the best by a small margin, and B(3,1) and B(3,1,3) are very close to B(3,3) in accuracy while having fewer parameters and fewer layers. B(3,1,3) is faster than the others by a small margin.
Based on the above, blocks with a comparable number of parameters turned out to give more or less the same results. Due to this fact, we hereafter restrict our attention to WRNs with 3×3 convolutions only, so as to also be consistent with other methods.
We next proceed with the experiments related to varying the deepening factor l (which represents the number of convolutional layers per block). We show indicative results in table 3, where in this case we took WRN-40-2 with 3×3 convolutions and trained several networks with different deepening factors l, the same number of parameters (2.2M) and the same number of convolutional layers.
As can be noticed, B(3,3) turned out to be the best, whereas B(3,3,3) and B(3,3,3,3) had the worst performance. We speculate that this is probably due to the increased difficulty in optimization as a result of the decreased number of residual connections in the last two cases. Furthermore, B(3) turned out to be quite a bit worse. The conclusion is that B(3,3) is optimal in terms of the number of convolutions per block. For this reason, in the remaining experiments we only consider wide residual networks with a block of type B(3,3).

As we try to increase the widening parameter k we have to decrease the total number of layers. To find an optimal ratio we experimented with k from 2 to 12 and depth from 16 to 40. The results are presented in table 4. As can be seen, all networks with 40, 22 and 16 layers see consistent gains when width is increased by 1 to 12 times. On the other hand, when keeping the same fixed widening factor k = 8 or k = 10 and varying depth from 16 to 28 there is a consistent improvement; however, when we further increase depth to 40, accuracy decreases (e.g., WRN-40-8 loses in accuracy to WRN-22-8).
We show additional results in table 5, where we compare thin and wide residual networks. As can be observed, wide WRN-40-4 compares favorably to thin ResNet-1001 as it achieves better accuracy on both CIFAR-10 and CIFAR-100. Yet, it is interesting that these networks have a comparable number of parameters, 8.9M and 10.2M, suggesting that depth does not add a regularization effect compared to width at this level. As we show further in benchmarks, WRN-40-4 is 8 times faster to train, so evidently the depth-to-width ratio in the original thin residual networks is far from optimal.
Table 4: Test error (%) of wide residual networks of various depth and width on CIFAR-10 and CIFAR-100.

depth | k | # params | CIFAR-10 | CIFAR-100
40 | 1 | 0.6M | 6.85 | 30.89
40 | 2 | 2.2M | 5.33 | 26.04
40 | 4 | 8.9M | 4.97 | 22.89
40 | 8 | 35.7M | 4.66 | -
28 | 10 | 36.5M | 4.17 | 20.50
28 | 12 | 52.5M | 4.33 | 20.43
22 | 8 | 17.2M | 4.38 | 21.22
22 | 10 | 26.8M | 4.44 | 20.75
16 | 8 | 11.0M | 4.81 | 22.07
16 | 10 | 17.1M | 4.56 | 21.59
Also, wide WRN-28-10 outperforms thin ResNet-1001 by 0.92% (with the same mini-batch size during training) on CIFAR-10 and by 3.46% on CIFAR-100, while having 36 times fewer layers (see table 5). We note that the result of 4.64% with ResNet-1001 was obtained with batch size 64, whereas we use a batch size of 128 in all of our experiments (i.e., all other results reported in table 5 are with batch size 128). Training curves for these networks are presented in figure 2.
Despite previous arguments that depth gives a regularization effect and width causes networks to overfit, we successfully train networks with several times more parameters than ResNet-1001. For instance, wide WRN-28-10 (table 5) and wide WRN-40-10 (table 9) have respectively 3.6 and about 5 times more parameters than ResNet-1001, and both outperform it by a significant margin.
Table 5: Test error (%) of different methods on CIFAR-10 and CIFAR-100.

method | depth | # params | CIFAR-10 | CIFAR-100
NIN [Lin et al.(2013)Lin, Chen, and Yan] | | | 8.81 | 35.67
DSN [Lee et al.(2014)Lee, Xie, Gallagher, Zhang, and Tu] | | | 8.22 | 34.57
FitNet [Romero et al.(2014)Romero, Ballas, Ebrahimi Kahou, Chassang, Gatta, and Bengio] | | | 8.39 | 35.04
Highway [Srivastava et al.(2015)Srivastava, Greff, and Schmidhuber] | | | 7.72 | 32.39
ELU [Clevert et al.(2015)Clevert, Unterthiner, and Hochreiter] | | | 6.55 | 24.28
original-ResNet [He et al.(2015a)He, Zhang, Ren, and Sun] | 110 | 1.7M | 6.43 | 25.16
original-ResNet [He et al.(2015a)He, Zhang, Ren, and Sun] | 1202 | 10.2M | 7.93 | 27.82
stoc-depth [Huang et al.(2016)Huang, Sun, Liu, Sedra, and Weinberger] | 110 | 1.7M | 5.23 | 24.58
stoc-depth [Huang et al.(2016)Huang, Sun, Liu, Sedra, and Weinberger] | 1202 | 10.2M | 4.91 | -
pre-act-ResNet [He et al.(2016)He, Zhang, Ren, and Sun] | 110 | 1.7M | 6.37 | -
pre-act-ResNet [He et al.(2016)He, Zhang, Ren, and Sun] | 164 | 1.7M | 5.46 | 24.33
pre-act-ResNet [He et al.(2016)He, Zhang, Ren, and Sun] | 1001 | 10.2M | 4.92 (4.64) | 22.71
WRN (ours) | 40-4 | 8.9M | 4.53 | 21.18
WRN (ours) | 16-8 | 11.0M | 4.27 | 20.43
WRN (ours) | 28-10 | 36.5M | 4.00 | 19.25
In general, we observed that CIFAR mean/std preprocessing allows training wider and deeper networks with better accuracy, and we achieved 18.3% on CIFAR-100 using WRN-40-10 (table 9), giving a total improvement of 4.4% over ResNet-1001 and establishing a new state-of-the-art result on this dataset.
To summarize:
widening consistently improves performance across residual networks of different depth;
increasing both depth and width helps until the number of parameters becomes too high and stronger regularization is needed;
there does not seem to be a regularization effect from very high depth in residual networks, as wide networks with the same number of parameters as thin ones can learn the same or better representations. Furthermore, wide networks can successfully learn with 2 or more times the number of parameters of thin ones, which would require doubling the depth of thin networks, making them infeasibly expensive to train.
We trained networks with dropout inserted into the residual block between convolutions on all datasets. We used cross-validation to determine the dropout probability values: 0.3 on CIFAR and 0.4 on SVHN. We also did not have to increase the number of training epochs compared to the baseline networks without dropout.
Dropout decreases test error on CIFAR-10 and CIFAR-100 by 0.11% and 0.4% respectively (over the median of 5 runs and with mean/std preprocessing) with WRN-28-10, and gives improvements with other ResNets as well (table 6). To our knowledge, this was the first result to approach 20% error on CIFAR-100, even outperforming methods with heavy data augmentation. There is only a slight drop in accuracy with WRN-16-4 on CIFAR-10, which we speculate is due to the relatively small number of parameters.
We notice a disturbing effect in residual network training after the first learning rate drop, when both the loss and the validation error suddenly start to go up and oscillate at high values until the next learning rate drop. We found that this is caused by weight decay; however, making it lower leads to a significant drop in accuracy. Interestingly, dropout partially removes this effect in most cases, see figures 2 and 3.
The effect of dropout becomes more evident on SVHN. This is probably due to the fact that we do not do any data augmentation and batch normalization overfits, so dropout adds a regularization effect. Evidence for this can be found in the training curves in figure 3, where the loss without dropout drops to very low values. The results are presented in table 6. We observe significant improvements from using dropout on both thin and wide networks. A thin 50-layer deep network even outperforms a thin 152-layer deep network with stochastic depth [Huang et al.(2016)Huang, Sun, Liu, Sedra, and Weinberger]. We additionally trained WRN-16-8 with dropout on SVHN (table 9), which achieves 1.54%, the best published result to our knowledge. Without dropout it achieves 1.81%.
Overall, despite the arguments about combining dropout with batch normalization, dropout shows itself to be an effective technique for regularizing thin and wide networks. It can be used to further improve results from widening, while also being complementary to it.
Table 6: Effect of dropout in wide residual networks (mean/std preprocessing, median of 5 runs).

depth | k | dropout | CIFAR-10 | CIFAR-100 | SVHN
16 | 4 | | 5.02 | 24.03 | 1.85
16 | 4 | ✓ | 5.24 | 23.91 | 1.64
28 | 10 | | 4.00 | 19.25 | -
28 | 10 | ✓ | 3.89 | 18.85 | -
52 | 1 | | 6.43 | 29.89 | 2.08
52 | 1 | ✓ | 6.28 | 29.78 | 1.70
For ImageNet we first experiment with non-bottleneck ResNet-18 and ResNet-34, trying to gradually increase their width from 1.0 to 3.0. The results are shown in table 7. Increasing width gradually increases the accuracy of both networks, and networks with a comparable number of parameters achieve similar results, despite having different depth. Although these networks have a large number of parameters, they are outperformed by bottleneck networks, which is probably either because the bottleneck architecture is simply better suited for the ImageNet classification task, or because this more complex task needs a deeper network. To test this, we took ResNet-50 and tried to make it wider by increasing the width of its inner layers. With a widening factor of 2.0 the resulting WRN-50-2-bottleneck outperforms ResNet-152, while having 3 times fewer layers and being significantly faster. WRN-50-2-bottleneck is only slightly worse and almost 2 times faster than the best-performing pre-activation ResNet-200, although having slightly more parameters (table 8). In general, we find that, unlike CIFAR, ImageNet networks need more width at the same depth to achieve the same accuracy. It is, however, clear that it is unnecessary to have residual networks with more than 50 layers, due to computational reasons.
We didn't try to train bigger bottleneck networks as 8-GPU machines are needed for that.
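A sketch of the widening idea applied to a bottleneck block, in the spirit of WRN-50-2: the inner width is multiplied by the widening factor while the block's output width stays the same. The class name and structure are our own illustration (an ordinary post-activation block, as the paper uses the original ResNet architecture on ImageNet), not the fb.resnet.torch code.

```python
import torch.nn as nn
import torch.nn.functional as F

class WideBottleneck(nn.Module):
    """Bottleneck block whose inner convolutions are widened by factor `k`."""
    def __init__(self, in_planes, planes, stride=1, k=2.0, expansion=4):
        super().__init__()
        width = int(planes * k)              # widened inner width
        out_planes = planes * expansion      # block output width is unchanged
        self.conv1 = nn.Conv2d(in_planes, width, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(width)
        self.conv2 = nn.Conv2d(width, width, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(width)
        self.conv3 = nn.Conv2d(width, out_planes, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_planes)
        self.shortcut = None
        if stride != 1 or in_planes != out_planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, out_planes, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_planes))

    def forward(self, x):
        residual = self.shortcut(x) if self.shortcut is not None else x
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return F.relu(out + residual)
```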
Table 7: ILSVRC-2012 validation error (single crop) of non-bottleneck ResNets with various widening factors.

width | 1.0 | 1.5 | 2.0 | 3.0
WRN-18, top-1/top-5 | 30.4 / 10.93 | 27.06 / 9.0 | 25.58 / 8.06 | 24.06 / 7.33
WRN-18, # params | 11.7M | 25.9M | 45.6M | 101.8M
WRN-34, top-1/top-5 | 26.77 / 8.67 | 24.5 / 7.58 | 23.39 / 7.00 | -
WRN-34, # params | 21.8M | 48.6M | 86.0M | -
Table 8: ILSVRC-2012 validation error (single crop) of bottleneck ResNets.

Model | top-1 err, % | top-5 err, % | # params | time / batch of 16
ResNet-50 | 24.01 | 7.02 | 25.6M | 49
ResNet-101 | 22.44 | 6.21 | 44.5M | 82
ResNet-152 | 22.16 | 6.16 | 60.2M | 115
WRN-50-2-bottleneck | 21.9 | 6.03 | 68.9M | 93
pre-ResNet-200 | 21.66 | 5.79 | 64.7M | 154
We also used WRN-34-2 to participate in the COCO 2016 object detection challenge, using a combination of MultiPathNet [Zagoruyko et al.(2016)Zagoruyko, Lerer, Lin, Pinheiro, Gross, Chintala, and Dollár] and LocNet [Gidaris and Komodakis(2016)]. Despite having only 34 layers, this model achieves state-of-the-art single-model performance, outperforming even ResNet-152- and Inception-v4-based models.
Finally, in table 9 we summarize our best WRN results over various commonly used datasets.
Table 9: Best WRN results over various commonly used datasets.

Dataset | model | dropout | test perf.
CIFAR-10 | WRN-40-10 | ✓ | 3.8%
CIFAR-100 | WRN-40-10 | ✓ | 18.3%
SVHN | WRN-16-8 | ✓ | 1.54%
ImageNet (single crop) | WRN-50-2-bottleneck | | 21.9% top-1, 5.79% top-5
COCO test-std | WRN-34-2 | | 35.2 mAP
Thin and deep residual networks with small kernels go against the nature of GPU computations because of their sequential structure. Increasing width helps balance computations much more effectively, so that wide networks are many times more efficient than thin ones, as our benchmarks show. We use cudnn v5 and a Titan X to measure forward+backward update times with mini-batch size 32 for several networks; the results are in figure 4. We show that our best CIFAR network, wide WRN-28-10, is 1.6 times faster than thin ResNet-1001. Furthermore, wide WRN-40-4, which has approximately the same accuracy as ResNet-1001, is 8 times faster.
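For reference, a rough forward+backward timing loop of the kind used for such benchmarks can be sketched as follows; this is an illustrative measurement helper of our own, not the benchmark script used in the paper.

```python
import time
import torch

def time_fwd_bwd(model, batch_size=32, image_size=32, n_iter=100, device='cuda'):
    """Rough forward+backward timing in milliseconds per mini-batch (illustrative)."""
    model = model.to(device).train()
    x = torch.randn(batch_size, 3, image_size, image_size, device=device)
    y = torch.randint(0, 10, (batch_size,), device=device)
    criterion = torch.nn.CrossEntropyLoss()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_iter):
        model.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
    torch.cuda.synchronize()
    return (time.time() - start) / n_iter * 1000
```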
In all our experiments we use SGD with Nesterov momentum and a cross-entropy loss. The initial learning rate is set to 0.1, weight decay to 0.0005, dampening to 0, momentum to 0.9 and mini-batch size to 128. On CIFAR the learning rate is dropped by 0.2 at 60, 120 and 160 epochs and we train for a total of 200 epochs. On SVHN the initial learning rate is set to 0.01 and we drop it by 0.1 at 80 and 120 epochs, training for a total of 160 epochs. Our implementation is based on Torch [Collobert et al.(2011)Collobert, Kavukcuoglu, and Farabet]. We use [Massa(2016)] to reduce the memory footprint of all our networks. For ImageNet experiments we used the fb.resnet.torch implementation [Gross and Wilber(2016)]. Our code and models are available at https://github.com/szagoruyko/wide-residual-networks.
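The CIFAR schedule described above, expressed as an illustrative PyTorch configuration (the paper's implementation is in Torch; this is a modern equivalent, not the original code, and `model` and `train_loader` are assumed to exist).

```python
import torch

# model and train_loader are assumed to be defined elsewhere
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=0.0005, dampening=0, nesterov=True)
# CIFAR: multiply the learning rate by 0.2 at epochs 60, 120 and 160; 200 epochs total
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 120, 160], gamma=0.2)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(200):
    for images, targets in train_loader:   # mini-batch size 128
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```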
We presented a study on the width of residual networks as well as on the use of dropout in residual architectures. Based on this study, we proposed a wide residual network architecture that provides state-of-the-art results on several commonly used benchmark datasets (including CIFAR-10, CIFAR-100, SVHN and COCO), as well as significant improvements on ImageNet. We demonstrate that wide networks with only 16 layers can significantly outperform 1000-layer deep networks on CIFAR, and that 50-layer networks outperform 152-layer ones on ImageNet, thus showing that the main power of residual networks is in the residual blocks, and not in extreme depth as claimed earlier. Also, wide residual networks are several times faster to train. We think that these intriguing findings will help further advances in research on deep neural networks.
We thank the startup company VisionLabs and Eugenio Culurciello for giving us access to their clusters, without which the ImageNet experiments would not have been possible. We also thank Adam Lerer and Sam Gross for helpful discussions. Work supported by EC project FP7-ICT-611145 ROBOSPECT.
M. Bianchini and F. Scarselli. On the complexity of shallow and deep neural network classifiers. In 22nd European Symposium on Artificial Neural Networks (ESANN 2014), Bruges, Belgium, April 23-25, 2014.
T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2012), volume 22, pages 924-932, 2012.