ResNetX: a more disordered and deeper network architecture

12/18/2019 ∙ by Wenfeng Feng, et al. ∙ Henan Polytechnic University

Designing efficient network structures has always been a core concern of neural network research. ResNet and its variants have proved to be efficient architectures. However, how to theoretically characterize the influence of network structure on performance is still vague. With the help of techniques from complex networks, we here provide a natural yet efficient extension to ResNet by folding its backbone chain. Our architecture has two structural features when mapped to directed acyclic graphs: first, a higher degree of disorder compared with ResNet, which lets ResNetX explore a larger number of feature maps with different sizes of receptive fields; second, a larger proportion of shorter paths compared to ResNet, which improves the direct flow of information through the entire network. Our architecture exposes a new dimension, namely "fold depth", in addition to the existing dimensions of depth, width, and cardinality. Our architecture is a natural extension to ResNet, and can be integrated with existing state-of-the-art methods with little effort. Image classification results on the CIFAR-10 and CIFAR-100 benchmarks suggest that our new network architecture performs better than ResNet.


1 Introduction

An artificial neural network is a computing system made up of many simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs [1]. How the processing elements are connected is believed to be crucial for the performance of an artificial neural network. Recent advances in computer vision models partially confirm this hypothesis: the effectiveness of ResNet [6, 7], DenseNet [9], and the models found by neural architecture search [15, 17, 19, 31, 32] is largely due to how their layers are connected.

Although the architecture of a neural network is critically important, there is still no consistent way to model it. This makes it impossible to theoretically measure the impact of network structure on performance, and it leaves architecture design driven by intuition, more trial and error than principle. Even recent models generated by automatic search over large architecture spaces are, in essence, a form of trial and error.

On the other hand, the theory of complex networks has been used to model networked systems for decades [16]. If we regard neural networks as networked systems, we can use this theory to model them and to characterize the impact of their structure on performance. Recently, Testolin et al. [26] studied deep belief networks using techniques from the field of complex networks, and Xie et al. [29] used three classical random graph models, which are theoretical basics of complex networks, to generate randomly connected neural network structures.

We here provide a natural yet efficient extension to the original residual networks. By mapping the newly designed convolutional neural network architectures to directed acyclic graphs, we show that they have two structural features, in terms of complex networks, that underlie the model's performance. First, they have a smaller average path length and thus a larger number of effective paths, which lets information flow more directly through the entire network. Second, the corresponding directed acyclic graphs have a high degree of disorder, meaning that nodes tend to connect to nodes at different levels, which further improves the multi-scale representation of the model.

2 Related work

2.1 Network architectures

The exploration of network structures has been a part of neural network research since its beginning. Recently, the structure of convolutional neural networks has been explored along the dimensions of depth [20, 6, 7, 9], width [30], cardinality [28], etc. The building blocks of network architectures have also been extended from residual blocks [6, 7, 30, 28] to many variants of efficient blocks [4, 25, 8, 18, 22], such as depthwise separable convolutional blocks.

2.2 Effective paths in neural networks

Veit et al. [27] interpreted residual networks as a collection of many paths of differing lengths. The gradient magnitude of a path decreases exponentially with the number of blocks it passes through in the backward pass. The total gradient magnitude contributed by the paths of each length can be calculated by multiplying the number of paths of that length by the expected gradient magnitude of paths of that length. Thus most of the total gradient magnitude is contributed by paths of shorter length, even though they constitute only a tiny fraction of all paths through the network. These shorter paths are called effective paths [27]. The larger the number of effective paths, the better the performance, other conditions being equal.
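To make this argument concrete, here is a small numerical sketch (not from the paper; the decay factor r is an illustrative assumption) of how the total gradient magnitude distributes over path lengths in a ResNet with n residual blocks:

```python
from math import comb

def gradient_by_path_length(n_blocks, r=0.5):
    # A ResNet with n residual blocks contains C(n, k) paths that pass
    # through exactly k non-linear blocks.  Assuming the expected gradient
    # magnitude of a path decays as r**k (r < 1 is an illustrative factor),
    # the total contribution of the length-k paths is C(n, k) * r**k.
    return [comb(n_blocks, k) * r**k for k in range(n_blocks + 1)]

contrib = gradient_by_path_length(10, r=0.3)
# the bulk of the total gradient comes from short paths, even though
# the path counts C(10, k) peak at the medium length k = 5
```

With n = 10 and r = 0.3, the contribution peaks at k = 2, while the number of paths peaks at k = 5: the short paths dominate the gradient despite being a minority.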

2.3 Degree of order of DAGs: trophic coherence

Directed Acyclic Graphs (DAGs) are representations of partially ordered sets [12]. The extent to which the nodes of a DAG are organized in levels can be measured by trophic coherence, a parameter originally defined for food webs and since shown to be closely related to many structural and dynamical aspects of complex systems [11, 5, 13].

Consider a directed acyclic graph given by an adjacency matrix $A$, with elements $a_{ij} = 1$ if there is a directed edge from node $i$ to node $j$, and $a_{ij} = 0$ if not. The in- and out-degrees of node $i$ are $k_i^{\mathrm{in}} = \sum_j a_{ji}$ and $k_i^{\mathrm{out}} = \sum_j a_{ij}$, respectively. The first node ($i = 1$) can never have ingoing edges, thus $k_1^{\mathrm{in}} = 0$. Similarly, the last node ($i = N$) can never have outgoing edges, thus $k_N^{\mathrm{out}} = 0$.

The trophic level $s_i$ of node $i$ is defined as

$s_i = 1 + \frac{1}{k_i^{\mathrm{in}}} \sum_j a_{ji} s_j$    (1)

if $k_i^{\mathrm{in}} > 0$, or $s_i = 1$ if $k_i^{\mathrm{in}} = 0$. In other words, the trophic level of the first node is $1$ by convention, while every other node is assigned the mean trophic level of its in-neighbors, plus one. Thus, for any DAG, the trophic levels of all nodes can be obtained by solving the linear system of Eq. 1. Johnson et al. [11] characterize each edge of a network by a trophic distance $x_{ij} = s_j - s_i$. They then consider the distribution of trophic distances over the network, $p(x)$. The homogeneity of $p(x)$ is called trophic coherence: the more similar the trophic distances of all the edges, the more coherent the network. As a measure of coherence, one can simply use the standard deviation of $p(x)$, which is referred to as the incoherence parameter $q$.
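These definitions can be sketched in a few lines of Python (a minimal illustration; the node labels and edge lists are hypothetical, and nodes are assumed to be listed in topological order so that every edge goes from a lower to a higher index):

```python
from statistics import pstdev

def trophic_levels(n, edges):
    """Trophic level s_i (Eq. 1): 1 for source nodes, otherwise the mean
    trophic level of the in-neighbors plus one.  Nodes 0..n-1 are assumed
    topologically ordered, i.e. every edge (i, j) has i < j."""
    in_nbrs = [[] for _ in range(n)]
    for i, j in edges:
        in_nbrs[j].append(i)
    s = [0.0] * n
    for v in range(n):
        nbrs = in_nbrs[v]
        s[v] = 1.0 if not nbrs else 1.0 + sum(s[u] for u in nbrs) / len(nbrs)
    return s

def incoherence(n, edges):
    """Incoherence parameter q: the standard deviation of the trophic
    distances x_ij = s_j - s_i over all edges."""
    s = trophic_levels(n, edges)
    return pstdev(s[j] - s[i] for i, j in edges)
```

A perfectly ordered chain has q = 0; adding skip connections spreads the trophic distances and raises q, i.e. the graph becomes more disordered.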

2.4 Multi-scale feature representation

The multi-scale representation ability of convolutional neural networks is achieved and improved by using convolutional layers with different kernel sizes (e.g., InceptionNets [22, 23, 24]), by utilizing features with different resolutions [2, 3], and by combining features with different sizes of receptive field [6, 7, 9]. We argue that the degree of disorder of convolutional neural network structures improves their multi-scale representation ability.

3 ResNetX

Consider a single image $x_0$ that is passed through a convolutional network. The network comprises $L$ layers, each of which implements a non-linear transformation $H_l(\cdot)$, where $l$ indexes the layer. $H_l(\cdot)$ can be a composite function of operations such as Batch Normalization (BN) [10], rectified linear units (ReLU), Pooling [14], or Convolution (Conv). We denote the output of the $l^{\mathrm{th}}$ layer as $x_l$.

ResNet [6, 7] adds a skip-connection that bypasses the non-linear transformation with an identity function:

$x_l = H_l(x_{l-1}) + x_{l-1}$    (2)

An advantage of ResNet is that the gradient can flow directly through the identity functions (dashed lines in Fig. 1a) from later layers to earlier layers.
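Unrolled over the whole network, Eq. 2 is just the following loop (a schematic sketch; each H_l stands for any shape-preserving callable):

```python
def resnet_forward(x0, blocks):
    # Eq. (2): x_l = H_l(x_{l-1}) + x_{l-1}, applied layer by layer.
    x = x0
    for H in blocks:
        x = H(x) + x   # identity skip: gradient flows through the "+ x" term
    return x
```

For instance, with blocks that double their scalar input, two layers map 1.0 to (2·1 + 1) = 3 and then to (2·3 + 3) = 9.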

3.1 ResNetX design

We provide a natural yet efficient extension to ResNet. Our intuition is simple: we fold the backbone chain (all the non-linear transformations) of ResNet so that the direct chain (all the identity functions) can trace back a larger number of previous feature maps with different sizes of receptive fields. We therefore introduce a new parameter $X$ to represent the fold depth. The deeper the fold, the larger the number of previous feature maps with different receptive fields the model can trace back. When $X = 1$, our model is just the original ResNet. To distinguish our model from ResNet, we name it ResNetX, where the character "X" at the end stands for the new parameter, i.e. the fold depth. Fig. 1 illustrates the architectures of the original ResNet (1a) and our ResNetX model when $X = 2$ (1b) and $X = 3$ (1c), respectively.

Compared with ResNet, our architectures trace back a larger number of previous feature maps with different sizes of receptive fields, which promotes the fusion of feature maps across receptive fields and improves the multi-scale representation ability. Moreover, our architectures increase the number of "direct" chains from one in ResNet (dashed line in Fig. 1a) to two (dashed lines in Fig. 1b), three (dashed lines in Fig. 1c), and more, which decreases the average path length through the entire network, increases the number of effective paths, and thus promotes the direct propagation of information along the "direct" chains. We argue that these two features account for the effectiveness of our model.

Figure 1: Diagrams of network architectures: (a) ResNet, (b) ResNetX when $X = 2$, (c) ResNetX when $X = 3$. The double-line circles represent external input image data, and the circles with plus signs inside represent summation over all ingoing data. The dashed lines represent identity functions, while the solid lines (with $H_l$ on them) represent non-linear transformations.

Our model can be formally expressed by the following steps and equations. First, the output of the current layer $l$, $x_l$, equals the sum of the non-linear transformation of the output of the previous layer, $H_l(x_{l-1})$, and the output of layer $l - d$, $x_{l-d}$:

$x_l = H_l(x_{l-1}) + x_{l-d}$    (3)

The layer difference $d$ is determined by the current layer index $l$ and the fold depth $X$. When the current layer index is less than the fold depth, we set $d = 1$ as in ResNet, so as to accumulate enough outputs that can be traced back by later layers, i.e.

$d = 1 \quad \text{if } l < X$    (4)

Otherwise, we first divide the current layer index by a number to get the remainder

(5)

After that, if the remainder falls within the given range, the layer difference equals the corresponding value, i.e.

(6)

Otherwise, we further compute the second remainder

(7)

and calculate the layer difference from it.

In summary, the layer difference can be computed by the following equation:

(8)
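Since the full schedule for d in Eqs. (5)-(8) depends on values not fully reproduced here, the recurrence of Eq. (3) can be sketched with the layer difference supplied as a function (a hedged sketch: `layer_diff` is a placeholder for the paper's Eq. (8), and with `layer_diff(l) == 1` the loop reduces to ResNet):

```python
def resnetx_forward(x0, blocks, layer_diff):
    # Eq. (3): x_l = H_l(x_{l-1}) + x_{l-d}, where d = layer_diff(l).
    xs = [x0]                            # xs[l] stores the output of layer l
    for l, H in enumerate(blocks, start=1):
        d = layer_diff(l)
        xs.append(H(xs[l - 1]) + xs[max(l - d, 0)])  # trace back d layers
    return xs[-1]
```

With scalar doubling blocks and x0 = 1.0, a constant layer difference of 1 reproduces the ResNet sequence 1, 3, 9, whereas a constant difference of 2 yields 1, 3, 7: the skip term traces back to an earlier output.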

3.2 Comparison between ResNetX and ResNet

In order to compare the architectures of ResNetX and ResNet, we first need to map both of them to directed acyclic graphs. The mapping from neural network architectures to general graphs is flexible. We here intentionally chose a simple mapping: nodes in the graph represent non-linear transformations of data, while edges represent data flows that send data from one node to another. This mapping separates the impact of network structure on performance from the impact of node operations, since all the weights of the neural network reside in the nodes of the graph.

Under the above mapping rule, the architecture of ResNet is mapped to a complete directed acyclic graph (Fig. 2). For a complete directed acyclic graph, the distribution of path lengths from the first node to the last node follows a binomial distribution, which conforms to the results in [27]. A complete directed acyclic graph also has a high value of the incoherence parameter $q$, which indicates a high degree of disorder.
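The binomial claim is easy to verify: in a complete DAG on n nodes (an edge i→j for every i < j), a path of length k from the first to the last node is fixed by choosing its k-1 intermediate nodes from the n-2 interior nodes (a small illustration, not code from the paper):

```python
from math import comb

def path_length_counts(n):
    # Number of paths of length k from node 0 to node n-1 in the complete
    # DAG on n nodes: choose the k-1 intermediate nodes among n-2 interior
    # nodes, so the counts are C(n-2, k-1) -- a binomial pattern.
    return {k: comb(n - 2, k - 1) for k in range(1, n)}
```

For n = 5 this gives {1: 1, 2: 3, 3: 3, 4: 1}, matching a brute-force enumeration of all paths.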

The architectures of ResNetX are mapped to different directed acyclic graphs according to the value of the fold depth $X$. Fig. 3 and Fig. 4 are two examples, for $X = 2$ and $X = 3$ respectively.

We compare the distributions of path lengths of ResNet and ResNetX in Fig. 5. As shown there, the proportions of shorter paths in ResNetX are all larger than in ResNet, and they increase with the fold depth $X$. We also computed the incoherence parameter $q$ of ResNetX for several fold depths and compared them with that of ResNet. As shown in Tab. 1, all the values for ResNetX are larger than that of ResNet, and they increase with the fold depth $X$.

The comparison of path lengths and incoherence parameters between ResNetX and ResNet shows that ResNetX has a larger proportion of shorter paths and a higher degree of disorder than ResNet, and we argue that these two features bring the better performance of ResNetX.

Figure 2: DAG mapping from ResNet. The square nodes with F inside represent non-linear transformations of data; the dashed lines represent data flows among nodes.
Figure 3: DAG mapping from ResNetX when $X = 2$.
Figure 4: DAG mapping from ResNetX when $X = 3$.
Figure 5: Comparison of path lengths of ResNet and ResNetX. The X axis is the path length; the Y axis is the cumulative distribution function (CDF) of path lengths.

Model Incoherence parameter ($q$)
ResNet 0.8523
ResNetX () 0.8904
ResNetX () 0.8950
ResNetX () 0.9124
Table 1: Comparison of incoherence parameters of ResNet and ResNetX.

4 Experiments

Owing to limited experimental conditions, we do not have the computing resources to train on large-scale datasets and must plan carefully to conserve them. We therefore only vary the parameters that are critical for the comparison between ResNetX and ResNet, and keep all others constant. Since ResNetX only changes how the residual connections between earlier and later layers are wired, and changes nothing inside the layers, it should mainly affect the influence of network depth on performance and be orthogonal to other aspects of the architecture. Therefore, we keep all other parameters constant and only change the network depth to evaluate its effect on performance.

We evaluate ResNetX on the classification task on the CIFAR-10 and CIFAR-100 datasets and compare with ResNet. As the building block of ResNetX we choose the bottleneck block of ResNet and the depthwise separable convolutional block of the Xception network [4], respectively.

4.1 Implementation details

Our focus is on the behavior of very deep networks, so we use simple architectures following the style of ResNet-110 [6]. The network inputs are 32x32 images. The first stem layer is a convolution-BN block. Four stages follow; each stage includes the same number of blocks, and the number of channels in all stages is set to 32. The first stage does not down-sample; the other three stages down-sample with max-pooling operations. The network ends with global average pooling, a 10-way or 100-way fully-connected layer, and softmax. The blocks can be the bottleneck block of ResNet or the Xception block. The blocks are connected according to the ResNetX or ResNet architecture.
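This skeleton might look as follows in PyTorch (a simplified sketch: the block here is a plain conv-BN-ReLU placeholder rather than the paper's bottleneck or Xception block, and the residual wiring shown is the ResNet-style X = 1 case):

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Placeholder H_l: conv-BN-ReLU at constant width."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)

class TinyNet(nn.Module):
    """Stem conv-BN, then 4 stages of residual blocks at 32 channels,
    max-pool down-sampling before stages 2-4, then GAP + FC + softmax
    (softmax folded into the loss in practice)."""
    def __init__(self, blocks_per_stage=2, n_classes=10, ch=32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1, bias=False), nn.BatchNorm2d(ch))
        layers = []
        for stage in range(4):
            if stage > 0:
                layers.append(nn.MaxPool2d(2))   # stages 2-4 down-sample
            layers += [Block(ch) for _ in range(blocks_per_stage)]
        self.layers = nn.ModuleList(layers)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, n_classes))

    def forward(self, x):
        x = self.stem(x)
        for m in self.layers:
            # residual add around blocks only, not around pooling layers
            x = m(x) + x if isinstance(m, Block) else m(x)
        return self.head(x)
```

Swapping the `isinstance`-guarded residual add for the ResNetX tracing rule of Eq. (3) is the only wiring change needed to turn this into the proposed architecture.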

We implement ResNetX using the PyTorch framework and evaluate it using the fastai library. We use the Learner class and its fit_one_cycle function in fastai to train both ResNetX and ResNet. The Adam optimizer and the "1cycle" learning rate policy [21] are used. The momentum range of Adam is set to [0.95, 0.85], weight decay to 0.01, mini-batch size to 128, and learning rate to 0.02 in all cases. To save limited computing resources, we run each combination of parameters 3 times, 5 epochs each time, and report the median accuracy of the 3 runs to reduce the impact of random variation. Obviously, we cannot produce state-of-the-art results in this setting; our goal is to evaluate the performance of ResNetX relative to ResNet.

4.2 DataSets

The two CIFAR datasets consist of colored natural images of 32x32 pixels. CIFAR-10 consists of images drawn from 10 classes and CIFAR-100 from 100 classes. The training and test sets contain 50,000 and 10,000 images, respectively. We follow the simple data augmentation in [9] for training: 4 pixels are padded on each side, and a 32x32 crop is randomly sampled from the padded image or its horizontal flip. For preprocessing, we normalize the data using the channel means and standard deviations.
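The pad-and-crop step is simple index arithmetic; a framework-free sketch (illustrative only) on an HxW image stored as nested lists:

```python
import random

def pad_and_random_crop(img, pad=4, size=32):
    """Zero-pad `pad` pixels on each side of an HxW image (list of rows),
    then sample a random size x size crop from the padded result."""
    h, w = len(img), len(img[0])
    padded = [[0] * (w + 2 * pad) for _ in range(pad)]
    padded += [[0] * pad + row + [0] * pad for row in img]
    padded += [[0] * (w + 2 * pad) for _ in range(pad)]
    top = random.randint(0, h + 2 * pad - size)
    left = random.randint(0, w + 2 * pad - size)
    return [row[left:left + size] for row in padded[top:top + size]]
```

In practice this (plus the horizontal flip) is a one-liner with standard framework transforms; the sketch just shows that each crop shifts the image by up to 4 pixels in each direction.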

4.3 Results

For CIFAR-10, the number of blocks per stage is set to {24, 32, 40, 64}, and the fold depth of ResNetX to {3, 4, 5}. Tab. 2 and Tab. 3 show the results when the basic block is implemented by the Xception block and the bottleneck block, respectively. ResNetX increases the best classification accuracy by 5.42% with the Xception block and by 2.33% with the bottleneck block.

For CIFAR-100, the number of blocks per stage is set to {24, 32, 40}, and the fold depth of ResNetX to {3, 4, 5}. Tab. 4 and Tab. 5 show the results when the basic block is implemented by the Xception block and the bottleneck block, respectively. ResNetX increases the best classification accuracy by 6.59% with the Xception block and by 2.67% with the bottleneck block.

Model Blocks per stage Accuracy (%)
ResNet 24 79.69
32 79.93
40 79.80
64 79.72
ResNetX ($X = 3$) 24 82.98
32 83.85
40 83.94
64 84.73
ResNetX ($X = 4$) 24 83.86
32 84.10
40 84.12
64 84.97
ResNetX ($X = 5$) 24 83.56
32 84.23
40 84.39
64 85.35
Table 2: Accuracy of ResNet and ResNetX for CIFAR-10 where basic block implemented by xception block. Bold values are best result for each case.
Model Blocks per stage Accuracy (%)
ResNet 24 85.62
32 85.53
40 85.74
64 85.39
ResNetX ($X = 3$) 24 86.03
32 86.83
40 87.40
64 88.07
ResNetX ($X = 4$) 24 85.92
32 86.57
40 86.90
64 87.64
ResNetX ($X = 5$) 24 85.86
32 86.28
40 86.45
64 87.16
Table 3: Accuracy of ResNet and ResNetX for CIFAR-10 where basic block implemented by bottleneck block. Bold values are best result for each case.
Model Blocks per stage Accuracy (%)
ResNet 24 46.72
32 47.15
40 47.10
ResNetX ($X = 3$) 24 51.76
32 52.09
40 52.91
ResNetX ($X = 4$) 24 52.50
32 53.13
40 53.74
ResNetX ($X = 5$) 24 52.14
32 52.90
40 53.52
Table 4: Accuracy of ResNet and ResNetX for CIFAR-100 where basic block implemented by xception block. Bold values are best result for each case.
Model Blocks per stage Accuracy (%)
ResNet 24 54.87
32 55.27
40 55.85
ResNetX ($X = 3$) 24 56.91
32 57.83
40 58.30
ResNetX ($X = 4$) 24 56.18
32 58.13
40 58.52
ResNetX ($X = 5$) 24 55.17
32 57.55
40 58.10
Table 5: Accuracy of ResNet and ResNetX for CIFAR-100 where the basic block is implemented by the bottleneck block. Bold values are the best result for each case.

5 Conclusion and future work

We presented a simple yet efficient architecture, namely ResNetX. ResNetX has two structural features when mapped to directed acyclic graphs: first, a higher degree of disorder compared with ResNet, which lets ResNetX explore a larger number of feature maps with different sizes of receptive fields; second, a larger proportion of shorter paths compared with ResNet, which improves the direct flow of information through the entire network. ResNetX exposes a new dimension, namely "fold depth", in addition to the existing dimensions of depth, width, and cardinality. Our ResNetX architecture is a natural extension to ResNet, and can be integrated with existing state-of-the-art methods with little effort. Image classification results on the CIFAR-10 and CIFAR-100 benchmarks suggest that the new architecture performs better than ResNet.

Although preliminary results suggest the effectiveness of our model, we recognize that our experiments are limited and that we have not yet produced state-of-the-art results. We will explore more parameter values and more datasets as resources permit. The source code of ResNetX is available at https://github.com/keepsimpler/zero, and we encourage others to conduct more experiments to evaluate its performance.

References

  • [1] M. Caudill (1987-12) Neural Networks Primer, Part I. AI Expert 2 (12), pp. 46–52. External Links: ISSN 0888-3785, Link Cited by: §1.
  • [2] C. Chen, Q. Fan, N. Mallinar, T. Sercu, and R. Feris (2019-07) Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition. arXiv:1807.03848 [cs]. Note: arXiv: 1807.03848Comment: git repo: https://github.com/IBM/BigLittleNet External Links: Link Cited by: §2.4.
  • [3] Y. Chen, H. Fan, B. Xu, Z. Yan, Y. Kalantidis, M. Rohrbach, S. Yan, and J. Feng (2019-08) Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution. arXiv:1904.05049 [cs]. Note: arXiv: 1904.05049Comment: Accepted to ICCV 2019 External Links: Link Cited by: §2.4.
  • [4] F. Chollet (2016-10) Xception: Deep Learning with Depthwise Separable Convolutions. arXiv:1610.02357 [cs]. Note: arXiv: 1610.02357 External Links: Link Cited by: §2.1, §4.
  • [5] V. Domínguez-García, S. Johnson, and M. A. Muñoz (2016-06) Intervality and coherence in complex networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 26 (6), pp. 065308 (en). Note: arXiv: 1603.03767 External Links: ISSN 1054-1500, 1089-7682, Link, Document Cited by: §2.3.
  • [6] K. He, X. Zhang, S. Ren, and J. Sun (2015-12) Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs]. Note: arXiv: 1512.03385Comment: Tech report External Links: Link Cited by: §1, §2.1, §2.4, §3, §4.1.
  • [7] K. He, X. Zhang, S. Ren, and J. Sun (2016-10) Identity Mappings in Deep Residual Networks. In Computer Vision – ECCV 2016, Lecture Notes in Computer Science, pp. 630–645 (en). External Links: ISBN 978-3-319-46492-3 978-3-319-46493-0, Link, Document Cited by: §1, §2.1, §2.4, §3.
  • [8] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam (2017-04) MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861 [cs]. Note: arXiv: 1704.04861 External Links: Link Cited by: §2.1.
  • [9] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger (2016-08) Densely Connected Convolutional Networks. arXiv:1608.06993 [cs]. Note: arXiv: 1608.06993Comment: CVPR 2017 External Links: Link Cited by: §1, §2.1, §2.4, §4.2.
  • [10] S. Ioffe and C. Szegedy (2015-02) Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167 [cs]. Note: arXiv: 1502.03167 External Links: Link Cited by: §3.
  • [11] S. Johnson, V. Domínguez-García, L. Donetti, and M. A. Muñoz (2014-04) Trophic coherence determines food-web stability. arXiv:1404.7728 [cond-mat, q-bio]. Note: arXiv: 1404.7728Comment: Manuscript plus Supporting Information. To appear in PNAS External Links: Link, Document Cited by: §2.3, §2.3.
  • [12] B. Karrer and M. E. J. Newman (2009-10) Random graph models for directed acyclic networks. Physical Review E 80 (4). Note: arXiv: 0907.4346Comment: 14 pages, 5 figures External Links: ISSN 1539-3755, 1550-2376, Link, Document Cited by: §2.3.
  • [13] J. Klaise and S. Johnson (2016-06) From neurons to epidemics: How trophic coherence affects spreading processes. Chaos: An Interdisciplinary Journal of Nonlinear Science 26 (6), pp. 065310. Note: arXiv: 1603.00670 External Links: ISSN 1054-1500, 1089-7682, Link, Document Cited by: §2.3.
  • [14] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner (1998-11) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. External Links: ISSN 0018-9219, 1558-2256, Document Cited by: §3.
  • [15] L. Li and A. Talwalkar (2019-02) Random Search and Reproducibility for Neural Architecture Search. arXiv:1902.07638 [cs, stat] (en). Note: arXiv: 1902.07638 External Links: Link Cited by: §1.
  • [16] M. Newman (2010) Networks: An Introduction. Oxford University Press, Inc., New York, NY, USA. External Links: ISBN 0-19-920665-1 978-0-19-920665-0 Cited by: §1.
  • [17] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean (2018-02) Efficient Neural Architecture Search via Parameter Sharing. arXiv:1802.03268 [cs, stat]. Note: arXiv: 1802.03268 External Links: Link Cited by: §1.
  • [18] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen (2018-01) MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv:1801.04381 [cs]. Note: arXiv: 1801.04381 External Links: Link Cited by: §2.1.
  • [19] C. Sciuto, K. Yu, M. Jaggi, C. Musat, and M. Salzmann (2019-02) Evaluating the Search Phase of Neural Architecture Search. arXiv:1902.08142 [cs, stat] (en). Note: arXiv: 1902.08142Comment: We find that random policy in NAS works amazingly well and propose an evaluation framework to have a fair comparison. 8 pages External Links: Link Cited by: §1.
  • [20] K. Simonyan and A. Zisserman (2014-09) Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556 [cs]. Note: arXiv: 1409.1556 External Links: Link Cited by: §2.1.
  • [21] L. N. Smith and N. Topin (2017-08) Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates. arXiv:1708.07120 [cs, stat]. Note: arXiv: 1708.07120Comment: This paper was significantly revised to show super-convergence as a general fast training methodologyhttps://github.com/lnsmith54/super-convergence External Links: Link Cited by: §4.1.
  • [22] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi (2016-02) Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. (en). External Links: Link Cited by: §2.1, §2.4.
  • [23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2014-09) Going Deeper with Convolutions. arXiv:1409.4842 [cs]. Note: arXiv: 1409.4842 External Links: Link Cited by: §2.4.
  • [24] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2015-12) Rethinking the Inception Architecture for Computer Vision. arXiv:1512.00567 [cs]. Note: arXiv: 1512.00567 External Links: Link Cited by: §2.4.
  • [25] M. Tan and Q. V. Le (2019-05) EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv:1905.11946 [cs, stat] (en). Note: arXiv: 1905.11946Comment: Published in ICML 2019 External Links: Link Cited by: §2.1.
  • [26] A. Testolin, M. Piccolini, and S. Suweis (2018-09) Deep learning systems as complex networks. (en). External Links: Link Cited by: §1.
  • [27] A. Veit, M. Wilber, and S. Belongie (2016) Residual Networks Behave Like Ensembles of Relatively Shallow Networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, USA, pp. 550–558. External Links: ISBN 978-1-5108-3881-9, Link Cited by: §2.2, §3.2.
  • [28] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He (2017-04) Aggregated Residual Transformations for Deep Neural Networks. arXiv:1611.05431 [cs] (en). Note: arXiv: 1611.05431Comment: Accepted to CVPR 2017. Code and models: https://github.com/facebookresearch/ResNeXt External Links: Link Cited by: §2.1.
  • [29] S. Xie, A. Kirillov, R. Girshick, and K. He (2019-04) Exploring Randomly Wired Neural Networks for Image Recognition. (en). External Links: Link Cited by: §1.
  • [30] S. Zagoruyko and N. Komodakis (2016-05) Wide Residual Networks. (en). External Links: Link Cited by: §2.1.
  • [31] B. Zoph and Q. V. Le (2016-11) Neural Architecture Search with Reinforcement Learning. arXiv:1611.01578 [cs]. Note: arXiv: 1611.01578 External Links: Link Cited by: §1.
  • [32] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le (2017-07) Learning Transferable Architectures for Scalable Image Recognition. arXiv:1707.07012 [cs, stat]. Note: arXiv: 1707.07012 External Links: Link Cited by: §1.