ResNeXt.pytorch
Reproduces ResNet-V3 with PyTorch
We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task, in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
Research on visual recognition is undergoing a transition from "feature engineering" to "network engineering" [25, 24, 44, 34, 36, 38, 14]. In contrast to traditional hand-designed features (e.g., SIFT [29] and HOG [5]), features learned by neural networks from large-scale data [33] require minimal human involvement during training, and can be transferred to a variety of recognition tasks [7, 10, 28]. Nevertheless, human effort has been shifted to designing better network architectures for learning representations.
Designing architectures becomes increasingly difficult with the growing number of hyper-parameters (width, i.e., the number of channels in a layer; filter sizes; strides; etc.), especially when there are many layers. The VGG-nets [36] exhibit a simple yet effective strategy of constructing very deep networks: stacking building blocks of the same shape. This strategy is inherited by ResNets [14], which stack modules of the same topology. This simple rule reduces the free choices of hyper-parameters, and depth is exposed as an essential dimension in neural networks. Moreover, we argue that the simplicity of this rule may reduce the risk of over-adapting the hyper-parameters to a specific dataset. The robustness of VGG-nets and ResNets has been proven by various visual recognition tasks [7, 10, 9, 28, 31, 14] and by non-visual tasks involving speech [42, 30] and language [4, 41, 20].

Unlike VGG-nets, the family of Inception models [38, 17, 39, 37] have demonstrated that carefully designed topologies are able to achieve compelling accuracy with low theoretical complexity. The Inception models have evolved over time [38, 39], but an important common property is a split-transform-merge
strategy. In an Inception module, the input is split into a few lower-dimensional embeddings (by 1×1 convolutions), transformed by a set of specialized filters (3×3, 5×5, etc.), and merged by concatenation. It can be shown that the solution space of this architecture is a strict subspace of the solution space of a single large layer (e.g., 5×5) operating on a high-dimensional embedding. The split-transform-merge behavior of Inception modules is expected to approach the representational power of large and dense layers, but at a considerably lower computational complexity.

Despite good accuracy, the realization of Inception models has been accompanied by a series of complicating factors: the filter numbers and sizes are tailored for each individual transformation, and the modules are customized stage-by-stage. Although careful combinations of these components yield excellent neural network recipes, it is in general unclear how to adapt the Inception architectures to new datasets/tasks, especially when there are many factors and hyper-parameters to be designed.
In this paper, we present a simple architecture which adopts VGG/ResNets' strategy of repeating layers, while exploiting the split-transform-merge strategy in an easy, extensible way. A module in our network performs a set of transformations, each on a low-dimensional embedding, whose outputs are aggregated by summation. We pursue a simple realization of this idea: the transformations to be aggregated are all of the same topology (e.g., Fig. 1 (right)). This design allows us to extend to any large number of transformations without specialized designs.
Interestingly, under this simplified situation we show that our model has two other equivalent forms (Fig. 3). The reformulation in Fig. 3(b) appears similar to the Inception-ResNet module [37] in that it concatenates multiple paths; but our module differs from all existing Inception modules in that all our paths share the same topology and thus the number of paths can be easily isolated as a factor to be investigated. In a more succinct reformulation, our module can be reshaped by Krizhevsky et al.'s grouped convolutions [24] (Fig. 3(c)), which, however, had been developed as an engineering compromise.
We empirically demonstrate that our aggregated transformations outperform the original ResNet module, even under the restricted condition of maintaining computational complexity and model size — e.g., Fig. 1(right) is designed to keep the FLOPs complexity and number of parameters of Fig. 1(left). We emphasize that while it is relatively easy to increase accuracy by increasing capacity (going deeper or wider), methods that increase accuracy while maintaining (or reducing) complexity are rare in the literature.
Our method indicates that cardinality (the size of the set of transformations) is a concrete, measurable dimension that is of central importance, in addition to the dimensions of width and depth. Experiments demonstrate that increasing cardinality is a more effective way of gaining accuracy than going deeper or wider, especially when depth and width start to give diminishing returns for existing models.
Our neural networks, named ResNeXt (suggesting the next dimension), outperform ResNet-101/152 [14], ResNet-200 [15], Inception-v3 [39], and Inception-ResNet-v2 [37] on the ImageNet classification dataset. In particular, a 101-layer ResNeXt is able to achieve better accuracy than ResNet-200 [15] but has only 50% complexity. Moreover, ResNeXt exhibits considerably simpler designs than all Inception models. ResNeXt was the foundation of our submission to the ILSVRC 2016 classification task, in which we secured second place. This paper further evaluates ResNeXt on a larger ImageNet-5K set and the COCO object detection dataset [27], showing consistently better accuracy than its ResNet counterparts. We expect that ResNeXt will also generalize well to other visual (and non-visual) recognition tasks.
Multi-branch convolutional networks. The Inception models [38, 17, 39, 37] are successful multi-branch architectures where each branch is carefully customized. ResNets [14] can be thought of as two-branch networks where one branch is the identity mapping. Deep neural decision forests [22] are tree-patterned multi-branch networks with learned splitting functions.
Grouped convolutions. The use of grouped convolutions dates back to the AlexNet paper [24], if not earlier. The motivation given by Krizhevsky et al. [24] is distributing the model over two GPUs. Grouped convolutions are supported by Caffe [19], Torch [3], and other libraries, mainly for compatibility with AlexNet. To the best of our knowledge, there has been little evidence on exploiting grouped convolutions to improve accuracy. A special case of grouped convolutions is channel-wise convolutions, in which the number of groups is equal to the number of channels. Channel-wise convolutions are part of the separable convolutions in [35].

Compressing convolutional networks. Decomposition (at the spatial [6, 18] and/or channel [6, 21, 16] level) is a widely adopted technique to reduce the redundancy of deep convolutional networks and accelerate/compress them. Ioannou et al. [16] present a "root"-patterned network for reducing computation, and the branches in the root are realized by grouped convolutions. These methods [6, 18, 21, 16] have shown an elegant compromise of accuracy with lower complexity and smaller model sizes. Instead of compression, our method is an architecture that empirically shows stronger representational power.
Ensembling. Averaging a set of independently trained networks is an effective solution to improving accuracy [24], widely adopted in recognition competitions [33]. Veit et al. [40] interpret a single ResNet as an ensemble of shallower networks, which results from ResNet’s additive behaviors [15]. Our method harnesses additions to aggregate a set of transformations. But we argue that it is imprecise to view our method as ensembling, because the members to be aggregated are trained jointly, not independently.
We adopt a highly modularized design following VGG/ResNets. Our network consists of a stack of residual blocks. These blocks have the same topology, and are subject to two simple rules inspired by VGG/ResNets: (i) if producing spatial maps of the same size, the blocks share the same hyper-parameters (width and filter sizes), and (ii) each time the spatial map is downsampled by a factor of 2, the width of the blocks is multiplied by a factor of 2. The second rule ensures that the computational complexity, in terms of FLOPs (floating-point operations, in # of multiply-adds), is roughly the same for all blocks.
With these two rules, we only need to design a template module, and all modules in a network can be determined accordingly. So these two rules greatly narrow down the design space and allow us to focus on a few key factors. The networks constructed by these rules are in Table 1.
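The two rules above can be made concrete with a small sketch (our own illustration, not code from the paper): halving the map size while doubling the width leaves the dominant per-block cost, proportional to (map size)² × (width)², unchanged.

```python
# Sketch of the two design rules in Sec. 3.1 (illustrative, names are ours):
# (i) blocks producing equal-sized maps share hyper-parameters, and
# (ii) each 2x spatial downsampling doubles the block width, which keeps
# per-block cost roughly constant (4x fewer positions, 4x more channel work).

def stage_plan(base_width=256, base_map=56, num_stages=4):
    """Return (spatial size, block output width) per stage under the two rules."""
    return [(base_map // (2 ** s), base_width * (2 ** s)) for s in range(num_stages)]

plan = stage_plan()
# -> [(56, 256), (28, 512), (14, 1024), (7, 2048)], matching conv2..conv5 of Table 1
```

Note that the per-stage cost proxy map² × width² is identical across stages, which is exactly why rule (ii) makes FLOPs roughly uniform over all blocks.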
stage    | output  | ResNet-50                                                 | ResNeXt-50 (32×4d)
conv1    | 112×112 | 7×7, 64, stride 2                                         | 7×7, 64, stride 2
conv2    | 56×56   | 3×3 max pool, stride 2; [1×1, 64; 3×3, 64; 1×1, 256] ×3   | 3×3 max pool, stride 2; [1×1, 128; 3×3, 128, C=32; 1×1, 256] ×3
conv3    | 28×28   | [1×1, 128; 3×3, 128; 1×1, 512] ×4                         | [1×1, 256; 3×3, 256, C=32; 1×1, 512] ×4
conv4    | 14×14   | [1×1, 256; 3×3, 256; 1×1, 1024] ×6                        | [1×1, 512; 3×3, 512, C=32; 1×1, 1024] ×6
conv5    | 7×7     | [1×1, 512; 3×3, 512; 1×1, 2048] ×3                        | [1×1, 1024; 3×3, 1024, C=32; 1×1, 2048] ×3
         | 1×1     | global average pool; 1000-d fc, softmax                   | global average pool; 1000-d fc, softmax
# params |         | 25.5×10^6                                                 | 25.0×10^6
FLOPs    |         | 4.1×10^9                                                  | 4.2×10^9
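As an illustration, a ResNeXt bottleneck in the grouped-convolution form of Fig. 3(c) can be sketched in PyTorch. This is a minimal sketch with hypothetical names (`ResNeXtBottleneck`, etc.), not the released code; a complete network also needs projection shortcuts and stage-level striding.

```python
import torch
import torch.nn as nn

class ResNeXtBottleneck(nn.Module):
    """Sketch of the block in Fig. 3(c): 1x1 reduce -> 3x3 grouped -> 1x1 expand,
    with an identity shortcut. Hypothetical naming, following the paper."""

    def __init__(self, in_channels=256, cardinality=32, bottleneck_width=4):
        super().__init__()
        D = cardinality * bottleneck_width  # 32 * 4 = 128 in Fig. 3(c)
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, D, kernel_size=1, bias=False),
            nn.BatchNorm2d(D), nn.ReLU(inplace=True),
            # splitting into 32 groups of 4-d input/output channels:
            nn.Conv2d(D, D, kernel_size=3, padding=1, groups=cardinality, bias=False),
            nn.BatchNorm2d(D), nn.ReLU(inplace=True),
            nn.Conv2d(D, in_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(in_channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # ReLU after adding to the shortcut, following [14]
        return self.relu(x + self.net(x))

# conv weights total 256*128 + 128*4*3*3 + 128*256 = 70,144 parameters,
# consistent with Eqn.(4) for C = 32, d = 4
```

The `groups=cardinality` argument of `nn.Conv2d` performs the split-transform-concatenate of Fig. 3(c) in a single layer.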
The simplest neurons in artificial neural networks perform the inner product (weighted sum), which is the elementary transformation done by fully-connected and convolutional layers. The inner product can be thought of as a form of aggregating transformation:

    \sum_{i=1}^{D} w_i x_i    (1)

where x = [x_1, x_2, ..., x_D] is a D-channel input vector to the neuron and w_i is a filter's weight for the i-th channel. This operation (usually including some output non-linearity) is referred to as a "neuron". See Fig. 2.

The above operation can be recast as a combination of splitting, transforming, and aggregating. (i) Splitting: the vector x is sliced as a low-dimensional embedding, and in the above, it is a single-dimension subspace x_i. (ii) Transforming: the low-dimensional representation is transformed, and in the above, it is simply scaled: w_i x_i. (iii) Aggregating: the transformations in all embeddings are aggregated by \sum_{i=1}^{D}.
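The split-transform-aggregate reading of Eqn.(1) can be spelled out in a few lines; the toy function below is our own illustration:

```python
# Eqn (1) read as split-transform-merge: split x into 1-d slices,
# scale each slice by its weight (transform), then sum (aggregate).

def neuron(w, x):
    slices = [xi for xi in x]                        # splitting: D one-dim embeddings
    scaled = [wi * xi for wi, xi in zip(w, slices)]  # transforming: w_i * x_i
    return sum(scaled)                               # aggregating: sum over i = 1..D

# identical to the usual inner product
assert neuron([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]) == 1 * 4 + 2 * 5 + 3 * 6
```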
Given the above analysis of a simple neuron, we consider replacing the elementary transformation (w_i x_i) with a more generic function, which in itself can also be a network. In contrast to "Network-in-Network" [26], which turns out to increase the dimension of depth, we show that our "Network-in-Neuron" expands along a new dimension.
Formally, we present aggregated transformations as:

    \mathcal{F}(x) = \sum_{i=1}^{C} \mathcal{T}_i(x)    (2)

where \mathcal{T}_i(x) can be an arbitrary function. Analogous to a simple neuron, \mathcal{T}_i should project x into an (optionally low-dimensional) embedding and then transform it.

In Eqn.(2), C is the size of the set of transformations to be aggregated. We refer to C as cardinality [2]. In Eqn.(2), C is in a position similar to D in Eqn.(1), but C need not equal D and can be an arbitrary number. While the dimension of width is related to the number of simple transformations (inner product), we argue that the dimension of cardinality controls the number of more complex transformations. We show by experiments that cardinality is an essential dimension and can be more effective than the dimensions of width and depth.
In this paper, we consider a simple way of designing the transformation functions: all \mathcal{T}_i's have the same topology. This extends the VGG-style strategy of repeating layers of the same shape, which is helpful for isolating a few factors and extending to any large number of transformations. We set the individual transformation \mathcal{T}_i to be the bottleneck-shaped architecture [14], as illustrated in Fig. 1 (right). In this case, the first 1×1 layer in each \mathcal{T}_i produces the low-dimensional embedding.
The aggregated transformation in Eqn.(2) serves as the residual function [14] (Fig. 1 right):

    y = x + \sum_{i=1}^{C} \mathcal{T}_i(x)    (3)

where y is the output.
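A minimal sketch of Eqn.(3) with C homogeneous toy transformations (purely illustrative, our own example; the real \mathcal{T}_i are bottleneck branches, not scalar functions):

```python
# y = x + sum_i T_i(x): aggregate C same-topology transformations,
# then add the identity shortcut.

def aggregated_residual(x, transforms):
    return x + sum(t(x) for t in transforms)

C = 32
# same topology (scaling), different parameters per branch -- toy stand-ins for T_i
transforms = [lambda x, k=k: 0.01 * k * x for k in range(C)]
y = aggregated_residual(2.0, transforms)
```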
Relation to Inception-ResNet. Some tensor manipulations show that the module in Fig. 1(right) (also shown in Fig. 3(a)) is equivalent to Fig. 3(b). (An informal but descriptive proof: note the equality A_1 B_1 + A_2 B_2 = [A_1, A_2][B_1; B_2], where [·, ·] is horizontal concatenation and [·; ·] is vertical concatenation. Let A_i be the weight of the last layer and B_i be the output response of the second-last layer in the block. In the case of C = 2, the element-wise addition in Fig. 3(a) is A_1 B_1 + A_2 B_2, the weight of the last layer in Fig. 3(b) is [A_1, A_2], and the concatenation of outputs of second-last layers in Fig. 3(b) is [B_1; B_2].) Fig. 3(b) appears similar to the Inception-ResNet [37] block in that it involves branching and concatenating in the residual function. But unlike all Inception or Inception-ResNet modules, we share the same topology among the multiple paths. Our module requires minimal extra effort designing each path.

Relation to Grouped Convolutions. The above module becomes more succinct using the notation of grouped convolutions [24]. (In a group conv layer [24], input and output channels are divided into groups, and convolutions are separately performed within each group.) This reformulation is illustrated in Fig. 3(c). All the low-dimensional embeddings (the first 1×1 layers) can be replaced by a single, wider layer (e.g., 1×1, 128-d in Fig. 3(c)). Splitting is essentially done by the grouped convolutional layer when it divides its input channels into groups. The grouped convolutional layer in Fig. 3(c) performs 32 groups of convolutions whose input and output channels are 4-dimensional. The grouped convolutional layer concatenates them as the outputs of the layer. The block in Fig. 3(c) looks like the original bottleneck residual block in Fig. 1(left), except that Fig. 3(c) is a wider but sparsely connected module.
We note that the reformulations produce nontrivial topologies only when the block has depth ≥ 3. If the block has depth = 2 (e.g., the basic block in [14]), the reformulations lead to trivially a wide, dense module. See the illustration in Fig. 4.
Discussion. We note that although we present reformulations that exhibit concatenation (Fig. 3(b)) or grouped convolutions (Fig. 3(c)), such reformulations are not always applicable for the general form of Eqn.(3), e.g., if the transformations \mathcal{T}_i take arbitrary forms and are heterogeneous. We choose to use homogeneous forms in this paper because they are simpler and extensible. Under this simplified case, grouped convolutions in the form of Fig. 3(c) are helpful for easing implementation.
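The identity underlying the Fig. 3(a)/(b) equivalence, that a sum of branch outputs equals one multiplication on concatenated responses, can be checked numerically; the tiny matrices below are our own example:

```python
# Check: A1 @ B1 + A2 @ B2 == [A1, A2] @ [B1; B2]
# ([,] horizontal concatenation, [;] vertical concatenation), with plain lists.

def matvec(A, b):
    """Multiply matrix A (list of rows) by column vector b (list)."""
    return [sum(a * x for a, x in zip(row, b)) for row in A]

A1, A2 = [[1.0, 2.0]], [[3.0, 4.0]]   # last-layer weights of two branches
B1, B2 = [5.0, 6.0], [7.0, 8.0]       # second-last-layer responses of two branches

branch_sum = [s + t for s, t in zip(matvec(A1, B1), matvec(A2, B2))]  # Fig. 3(a)
concat_form = matvec([A1[0] + A2[0]], B1 + B2)                        # Fig. 3(b)
assert branch_sum == concat_form
```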
Our experiments in the next section will show that our models improve accuracy when maintaining the model complexity and number of parameters. This is not only interesting in practice, but more importantly, the complexity and number of parameters represent inherent capacity of models and thus are often investigated as fundamental properties of deep networks [8].
When we evaluate different cardinalities while preserving complexity, we want to minimize the modification of other hyperparameters. We choose to adjust the width of the bottleneck (e.g., 4d in Fig 1(right)), because it can be isolated from the input and output of the block. This strategy introduces no change to other hyperparameters (depth or input/output width of blocks), so is helpful for us to focus on the impact of cardinality.
In Fig. 1(left), the original ResNet bottleneck block [14] has 256·64 + 3·3·64·64 + 64·256 ≈ 70k parameters and proportional FLOPs (on the same feature map size). With bottleneck width d, our template in Fig. 1(right) has:

    C · (256·d + 3·3·d·d + d·256)    (4)

parameters and proportional FLOPs. When C = 32 and d = 4, Eqn.(4) ≈ 70k. Table 2 shows the relationship between cardinality C and bottleneck width d.

Because we adopt the two rules in Sec. 3.1, the above approximate equality is valid between a ResNet bottleneck block and our ResNeXt on all stages (except for the subsampling layers where the feature map sizes change). Table 1 compares the original ResNet-50 and our ResNeXt-50, which is of similar capacity. (The marginally smaller number of parameters and marginally higher FLOPs are mainly caused by the blocks where the map sizes change.) We note that the complexity can only be preserved approximately, but the difference in complexity is minor and does not bias our results.
cardinality C           | 1  | 2  | 4  | 8   | 32
width of bottleneck d   | 64 | 40 | 24 | 14  | 4
width of group conv.    | 64 | 80 | 96 | 112 | 128
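The ≈70k correspondence claimed for every row of Table 2 can be verified directly from Eqn.(4); the helper name is ours:

```python
# Evaluate Eqn (4), C * (256*d + 3*3*d*d + d*256), for the (C, d) pairs of
# Table 2: every setting stays close to the ~70k parameters of the original
# ResNet bottleneck block.

def bottleneck_params(C, d, width_in_out=256):
    return C * (width_in_out * d + 3 * 3 * d * d + d * width_in_out)

settings = [(1, 64), (2, 40), (4, 24), (8, 14), (32, 4)]
counts = {(C, d): bottleneck_params(C, d) for C, d in settings}
# e.g. (1, 64) -> 69632, (32, 4) -> 70144; all within a few percent of 70k
```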
Our implementation follows [14] and the publicly available code of fb.resnet.torch [11]. On the ImageNet dataset, the input image is a 224×224 crop randomly sampled from a resized image using the scale and aspect ratio augmentation of [38] implemented by [11]. The shortcuts are identity connections, except for those increasing dimensions, which are projections (type B in [14]). Downsampling of conv3, 4, and 5 is done by stride-2 convolutions in the 3×3 layer of the first block in each stage, as suggested in [11]. We use SGD with a mini-batch size of 256 on 8 GPUs (32 per GPU). The weight decay is 0.0001 and the momentum is 0.9. We start from a learning rate of 0.1, and divide it by 10 three times using the schedule in [11]. We adopt the weight initialization of [13]. In all ablation comparisons, we evaluate the error on the single 224×224 center crop from an image whose shorter side is 256.
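The learning-rate policy described above amounts to a step schedule. The sketch below is ours; the milestone epochs are placeholders for illustration, since the exact schedule is defined in [11]:

```python
# Step schedule: start at 0.1 and divide by 10 at each milestone epoch.
# Milestones here are assumed values, not taken from the paper.

def learning_rate(epoch, base_lr=0.1, milestones=(30, 60, 90)):
    drops = sum(epoch >= m for m in milestones)
    return base_lr * (0.1 ** drops)
```

In PyTorch this policy corresponds to `torch.optim.lr_scheduler.MultiStepLR` with `gamma=0.1`.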
Our models are realized in the form of Fig. 3(c). We perform batch normalization (BN) [17] right after the convolutions in Fig. 3(c). (With BN, for the equivalent form in Fig. 3(a), BN is employed after aggregating the transformations and before adding to the shortcut.) ReLU is performed right after each BN, except for the output of the block, where ReLU is performed after adding to the shortcut, following [14].

We conduct ablation experiments on the 1000-class ImageNet classification task [33]. We follow [14] to construct 50-layer and 101-layer residual networks. We simply replace all blocks in ResNet-50/101 with our blocks.
Notations. Because we adopt the two rules in Sec. 3.1, it is sufficient for us to refer to an architecture by the template. For example, Table 1 shows a ResNeXt-50 constructed by a template with cardinality = 32 and bottleneck width = 4d (Fig. 3). This network is denoted as ResNeXt-50 (32×4d) for simplicity. We note that the input/output width of the template is fixed as 256-d (Fig. 3), and all widths are doubled each time the feature map is subsampled (see Table 1).
Cardinality vs. Width. We first evaluate the trade-off between cardinality and bottleneck width, under preserved complexity as listed in Table 2. Table 3 shows the results and Fig. 5 shows the curves of error vs. epochs. Compared with ResNet-50 (Table 3 top and Fig. 5 left), the 32×4d ResNeXt-50 has a validation error of 22.2%, which is 1.7% lower than the ResNet baseline's 23.9%. With cardinality C increasing from 1 to 32 while keeping complexity, the error rate keeps reducing. Furthermore, the 32×4d ResNeXt also has a much lower training error than the ResNet counterpart, suggesting that the gains are not from regularization but from stronger representations.

Similar trends are observed in the case of ResNet-101 (Fig. 5 right, Table 3 bottom), where the 32×4d ResNeXt-101 outperforms the ResNet-101 counterpart by 0.8%. Although this improvement of validation error is smaller than that of the 50-layer case, the improvement of training error is still big (20% for ResNet-101 and 16% for 32×4d ResNeXt-101, Fig. 5 right). In fact, more training data will enlarge the gap of validation error, as we show on an ImageNet-5K set in the next subsection.
Table 3 also suggests that with complexity preserved, increasing cardinality at the price of reducing width starts to show saturating accuracy when the bottleneck width is small. We argue that it is not worthwhile to keep reducing width in such a trade-off. So we adopt a bottleneck width no smaller than 4-d in the following.
            | setting  | top-1 error (%)
ResNet-50   | 1 × 64d  | 23.9
ResNeXt-50  | 2 × 40d  | 23.0
ResNeXt-50  | 4 × 24d  | 22.6
ResNeXt-50  | 8 × 14d  | 22.3
ResNeXt-50  | 32 × 4d  | 22.2
ResNet-101  | 1 × 64d  | 22.0
ResNeXt-101 | 2 × 40d  | 21.7
ResNeXt-101 | 4 × 24d  | 21.4
ResNeXt-101 | 8 × 14d  | 21.3
ResNeXt-101 | 32 × 4d  | 21.2
                   | setting  | top-1 err (%) | top-5 err (%)
1× complexity references:
ResNet-101         | 1 × 64d  | 22.0          | 6.0
ResNeXt-101        | 32 × 4d  | 21.2          | 5.6
2× complexity models follow:
ResNet-200 [15]    | 1 × 64d  | 21.7          | 5.8
ResNet-101, wider  | 1 × 100d | 21.3          | 5.7
ResNeXt-101        | 2 × 64d  | 20.7          | 5.5
ResNeXt-101        | 64 × 4d  | 20.4          | 5.3
Increasing Cardinality vs. Deeper/Wider. Next we investigate increasing complexity by increasing cardinality or increasing depth or width. The following comparison can also be viewed as with reference to 2× FLOPs of the ResNet-101 baseline. We compare the following variants that have ~15 billion FLOPs. (i) Going deeper to 200 layers. We adopt the ResNet-200 [15] implemented in [11]. (ii) Going wider by increasing the bottleneck width. (iii) Increasing cardinality by doubling C.
Table 4 shows that increasing complexity by 2× consistently reduces error vs. the ResNet-101 baseline (22.0%). But the improvement is small when going deeper (ResNet-200, by 0.3%) or wider (wider ResNet-101, by 0.7%).
On the contrary, increasing cardinality shows much better results than going deeper or wider. The 2×64d ResNeXt-101 (i.e., doubling C on the 1×64d ResNet-101 baseline and keeping the width) reduces the top-1 error by 1.3% to 20.7%. The 64×4d ResNeXt-101 (i.e., doubling C on the 32×4d ResNeXt-101 and keeping the width) reduces the top-1 error to 20.4%.
We also note that the 32×4d ResNeXt-101 (21.2%) performs better than the deeper ResNet-200 and the wider ResNet-101, even though it has only 50% complexity. This again shows that cardinality is a more effective dimension than the dimensions of depth and width.
Residual connections. The following table shows the effects of the residual (shortcut) connections:
           | setting | w/ residual | w/o residual
ResNet-50  | 1 × 64d | 23.9        | 31.2
ResNeXt-50 | 32 × 4d | 22.2        | 26.1
Removing shortcuts from the ResNeXt50 increases the error by 3.9 points to 26.1%. Removing shortcuts from its ResNet50 counterpart is much worse (31.2%). These comparisons suggest that the residual connections are helpful for optimization, whereas aggregated transformations are stronger representations, as shown by the fact that they perform consistently better than their counterparts with or without residual connections.
Performance. For simplicity we use Torch's built-in grouped convolution implementation, without special optimization. We note that this implementation was brute-force and not parallelization-friendly. On 8 GPUs of NVIDIA M40, training the 32×4d ResNeXt-101 in Table 3 takes 0.95s per mini-batch, vs. 0.70s for the ResNet-101 baseline that has similar FLOPs. We argue that this is a reasonable overhead. We expect that a carefully engineered lower-level implementation (e.g., in CUDA) will reduce this overhead. We also expect that the inference time on CPUs will present less overhead. Training the 2×-complexity model (64×4d ResNeXt-101) takes 1.7s per mini-batch and 10 days total on 8 GPUs.
Comparisons with state-of-the-art results. Table 5 shows more results of single-crop testing on the ImageNet validation set. In addition to testing a 224×224 crop, we also evaluate a 320×320 crop following [15]. Our results compare favorably with ResNet, Inception-v3/v4, and Inception-ResNet-v2, achieving a single-crop top-5 error rate of 4.4%. In addition, our architecture design is much simpler than all Inception models, and requires considerably fewer hyper-parameters to be set by hand.
ResNeXt is the foundation of our entries to the ILSVRC 2016 classification task, in which we achieved 2nd place. We note that many models (including ours) start to get saturated on this dataset after using multi-scale and/or multi-crop testing. We had single-model top-1/top-5 error rates of 17.7%/3.7% using the multi-scale dense testing in [14], on par with Inception-ResNet-v2's single-model results of 17.8%/3.7% that adopt multi-scale, multi-crop testing. We had an ensemble result of 3.03% top-5 error on the test set, on par with the winner's 2.99% and Inception-v4/Inception-ResNet-v2's 3.08% [37].
                          | 224×224 crop          | 320×320 / 299×299 crop
                          | top-1 err | top-5 err | top-1 err | top-5 err
ResNet-101 [14]           | 22.0      | 6.0       | -         | -
ResNet-200 [15]           | 21.7      | 5.8       | 20.1      | 4.8
Inception-v3 [39]         | -         | -         | 21.2      | 5.6
Inception-v4 [37]         | -         | -         | 20.0      | 5.0
Inception-ResNet-v2 [37]  | -         | -         | 19.9      | 4.9
ResNeXt-101 (64 × 4d)     | 20.4      | 5.3       | 19.1      | 4.4
            | setting | 5K-way top-1 | 5K-way top-5 | 1K-way top-1 | 1K-way top-5
ResNet-50   | 1 × 64d | 45.5         | 19.4         | 27.1         | 8.2
ResNeXt-50  | 32 × 4d | 42.3         | 16.8         | 24.4         | 6.6
ResNet-101  | 1 × 64d | 42.4         | 16.9         | 24.2         | 6.8
ResNeXt-101 | 32 × 4d | 40.1         | 15.1         | 22.2         | 5.7
The performance on ImageNet1K appears to saturate. But we argue that this is not because of the capability of the models but because of the complexity of the dataset. Next we evaluate our models on a larger ImageNet subset that has 5000 categories.
Our 5K dataset is a subset of the full ImageNet-22K set [33]. The 5000 categories consist of the original ImageNet-1K categories and the additional 4000 categories that have the largest number of images in the full ImageNet set. The 5K set has 6.8 million images, about 5× that of the 1K set. There is no official train/val split available, so we opt to evaluate on the original ImageNet-1K validation set. On this 1K-class val set, the models can be evaluated as a 5K-way classification task (all labels predicted to be in the other 4K classes are automatically erroneous) or as a 1K-way classification task (softmax is applied only on the 1K classes) at test time.
The implementation details are the same as in Sec. 4. The 5K-training models are all trained from scratch, and are trained for the same number of mini-batches as the 1K-training models (so 1/5× epochs). Table 6 and Fig. 6 show the comparisons under preserved complexity. ResNeXt-50 reduces the 5K-way top-1 error by 3.2% compared with ResNet-50, and ResNeXt-101 reduces the 5K-way top-1 error by 2.3% compared with ResNet-101. Similar gaps are observed on the 1K-way error. These demonstrate the stronger representational power of ResNeXt.
Moreover, we find that the models trained on the 5K set (with 1K-way error 22.2%/5.7% in Table 6) perform competitively compared with those trained on the 1K set (21.2%/5.6% in Table 3), evaluated on the same 1K-way classification task on the validation set. This result is achieved without increasing the training time (due to the same number of mini-batches) and without fine-tuning. We argue that this is a promising result, given that the training task of classifying 5K categories is a more challenging one.
We conduct more experiments on the CIFAR-10 and CIFAR-100 datasets [23]. We use the architectures as in [14] and replace the basic residual block by the bottleneck template of [1×1, 64; 3×3, 64; 1×1, 256]. Our networks start with a single 3×3 conv layer, followed by 3 stages each having 3 residual blocks, and end with average pooling and a fully-connected classifier (29 layers deep in total), following [14]. We adopt the same translation and flipping data augmentation as [14]. Implementation details are in the appendix.
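The 29-layer count follows from the structure just described; a one-line bookkeeping check (our own arithmetic):

```python
# Depth of the CIFAR model: one 3x3 stem conv, 3 stages of 3 bottleneck
# blocks with 3 conv layers each, plus the final fully-connected classifier.
stem, stages, blocks_per_stage, layers_per_block, classifier = 1, 3, 3, 3, 1
depth = stem + stages * blocks_per_stage * layers_per_block + classifier
assert depth == 29  # matches the "ResNeXt-29" naming
```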
We compare two cases of increasing complexity based on the above baseline: (i) increase cardinality and fix all widths, or (ii) increase the width of the bottleneck and fix cardinality = 1. We train and evaluate a series of networks under these changes. Fig. 7 shows the comparisons of test error rates vs. model sizes. We find that increasing cardinality is more effective than increasing width, consistent with what we have observed on ImageNet-1K. Table 7 shows the results and model sizes, compared with the Wide ResNet [43], which is the best published record. Our model with a similar model size (34.4M) shows results better than Wide ResNet. Our larger method achieves 3.58% test error (average of 10 runs) on CIFAR-10 and 17.31% on CIFAR-100. To the best of our knowledge, these are the state-of-the-art results (with similar data augmentation) in the literature, including unpublished technical reports.
                     | # params | CIFAR-10 | CIFAR-100
Wide ResNet [43]     | 36.5M    | 4.17     | 20.50
ResNeXt-29, 8 × 64d  | 34.4M    | 3.65     | 17.77
ResNeXt-29, 16 × 64d | 68.1M    | 3.58     | 17.31

            | setting | AP@0.5 | AP
ResNet-50   | 1 × 64d | 47.6   | 26.5
ResNeXt-50  | 32 × 4d | 49.7   | 27.5
ResNet-101  | 1 × 64d | 51.1   | 29.8
ResNeXt-101 | 32 × 4d | 51.9   | 30.0
Next we evaluate the generalizability on the COCO object detection set [27]. We train the models on the 80k training set plus a 35k val subset and evaluate on a 5k val subset (called minival), following [1]. We evaluate the COCO-style Average Precision (AP) as well as AP@IoU=0.5 [27]. We adopt the basic Faster R-CNN [32] and follow [14] to plug ResNet/ResNeXt into it. The models are pre-trained on ImageNet-1K and fine-tuned on the detection set. Implementation details are in the appendix.
Table 8 shows the comparisons. On the 50layer baseline, ResNeXt improves AP@0.5 by 2.1% and AP by 1.0%, without increasing complexity. ResNeXt shows smaller improvements on the 101layer baseline. We conjecture that more training data will lead to a larger gap, as observed on the ImageNet5K set.
It is also worth noting that ResNeXt has recently been adopted in Mask R-CNN [12], which achieves state-of-the-art results on COCO instance segmentation and object detection tasks.
S.X. and Z.T.'s research was partly supported by NSF IIS-1618477. The authors would like to thank Tsung-Yi Lin and Priya Goyal for valuable discussions.
We train the models on the 50k training set and evaluate on the 10k test set. The input image is a 32×32 crop randomly sampled from a zero-padded 40×40 image or its flipping, following [14]. No other data augmentation is used. The first layer is a 3×3 conv with 64 filters. There are 3 stages each having 3 residual blocks, and the output map size is 32, 16, and 8 for each stage [14]. The network ends with global average pooling and a fully-connected layer. Width is increased by 2× when the stage changes (downsampling), as in Sec. 3.1. The models are trained on 8 GPUs with a mini-batch size of 128, with a weight decay of 0.0005 and a momentum of 0.9. We start with a learning rate of 0.1 and train the models for 300 epochs, reducing the learning rate at the 150th and 225th epochs. Other implementation details are as in [11].

We adopt the Faster R-CNN system [32]. For simplicity we do not share the features between RPN and Fast R-CNN. In the RPN step, we train on 8 GPUs with each GPU holding 2 images per mini-batch and 256 anchors per image. We train the RPN step for 120k mini-batches at a learning rate of 0.02 and the next 60k at 0.002. In the Fast R-CNN step, we train on 8 GPUs with each GPU holding 1 image and 64 regions per mini-batch. We train the Fast R-CNN step for 120k mini-batches at a learning rate of 0.005 and the next 60k at 0.0005. We use a weight decay of 0.0001 and a momentum of 0.9. Other implementation details are as in https://github.com/rbgirshick/py-faster-rcnn.
Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks. In CVPR, 2016.
Torch: a modular machine learning software library. Technical report, Idiap, 2002.
Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.
Rethinking the inception architecture for computer vision. In CVPR, 2016.