1 Introduction
An artificial neural network is a computing system made up of many simple, highly interconnected processing elements, which process information through their dynamic state response to external inputs [1]. How the processing elements are connected is believed to be crucial to the performance of an artificial neural network. Recent advances in computer vision models partially confirm this hypothesis: the effectiveness of ResNet [6, 7], DenseNet [9], and the models found by neural architecture search [15, 17, 19, 31, 32] is largely due to how their elements are connected. Although the architecture of a neural network is critically important, there is still no consistent way to model it. This makes it impossible to theoretically measure the impact of network structure on performance, and it leaves architecture design driven by intuition and trial and error. Even recent models generated by automatically searching a large architecture space are a form of trial and error.
On the other hand, the theory of complex networks has been used to model networked systems for decades [16]. If we regard neural networks as networked systems, we can use the theory of complex networks to model them and to characterize the impact of network structure on performance. Recently, Testolin et al. [26] studied deep belief networks using techniques from the field of complex networks, and Xie et al. [29] used three classical random graph models, which are theoretical cornerstones of complex networks, to generate randomly connected neural network structures.
We here provide a natural yet efficient extension of the original residual networks. By mapping the newly designed convolutional neural network architectures to directed acyclic graphs, we show that they have two structural features, in the sense of complex networks, that account for the high performance of the model. The first is a smaller average path length and thus a larger number of effective paths, which allows information to flow more directly through the entire network. The second is that the corresponding directed acyclic graphs have a high degree of disorder, meaning that nodes tend to connect to nodes at different levels, which further improves the multiscale representation of the model.
2 Related work
2.1 Network architectures
The exploration of network structures has been a part of neural network research since their initial discovery. Recently, the structure of convolutional neural networks has been explored in terms of depth [20, 6, 7, 9], width [30], cardinality [28], etc. The building blocks of network architectures have also been extended from residual blocks [6, 7, 30, 28] to many variants of efficient blocks [4, 25, 8, 18, 22], such as depthwise separable convolutional blocks.
2.2 Effective paths in neural networks
Veit et al. [27] interpreted residual networks as a collection of many paths of differing lengths. The gradient magnitude of a path decreases exponentially with the number of blocks it passes through in the backward pass. The total gradient magnitude contributed by the paths of each length can be calculated by multiplying the number of paths of that length by the expected gradient magnitude of paths of that length. Thus most of the total gradient magnitude is contributed by paths of shorter length, even though they constitute only a tiny fraction of all paths through the network. These shorter paths are called effective paths [27]. The larger the number of effective paths, the better the performance, other conditions being unchanged.
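To make this argument concrete: a ResNet with n residual blocks contains C(n, k) paths that pass through exactly k nonlinear blocks. The sketch below assumes, purely for illustration, that a path's expected gradient magnitude shrinks geometrically with its length; the decay factor is not a measured value from the source.

```python
from math import comb

def gradient_contribution(n_blocks, decay=0.3):
    """Share of the total gradient magnitude carried by paths of each length.

    A ResNet with n residual blocks contains C(n, k) paths that pass through
    exactly k nonlinear blocks; the expected gradient magnitude of a path is
    assumed to shrink geometrically with its length (the decay factor 0.3 is
    purely illustrative, not measured).
    """
    raw = {k: comb(n_blocks, k) * decay ** k for k in range(n_blocks + 1)}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}  # normalized shares

shares = gradient_contribution(54)
# Short paths dominate the gradient even though they are a tiny fraction
# of all 2**54 paths through the network.
short = sum(v for k, v in shares.items() if k <= 20)
```

Under this assumption, paths through at most 20 of the 54 blocks carry nearly all of the gradient, although they make up only a small fraction of the paths; this is the sense in which short paths are "effective."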
2.3 Degree of order of DAGs: trophic coherence
Directed acyclic graphs (DAGs) are representations of partially ordered sets [12]. The extent to which the nodes of a DAG are organized in levels can be measured by trophic coherence, a parameter originally defined for food webs and later shown to be closely related to many structural and dynamical aspects of complex systems [11, 5, 13].
Consider a directed acyclic graph given by an adjacency matrix $A$, with elements $a_{ij} = 1$ if there is a directed edge from node $i$ to node $j$, and $a_{ij} = 0$ if not. The in- and out-degrees of node $i$ are $k_i^{\mathrm{in}} = \sum_j a_{ji}$ and $k_i^{\mathrm{out}} = \sum_j a_{ij}$, respectively. The first node ($i = 1$) never has ingoing edges, thus $k_1^{\mathrm{in}} = 0$. Similarly, the last node ($i = N$) never has outgoing edges, thus $k_N^{\mathrm{out}} = 0$.
The trophic level $s_i$ of node $i$ is defined as
$$s_i = 1 + \frac{1}{k_i^{\mathrm{in}}} \sum_j a_{ji} s_j \qquad (1)$$
if $k_i^{\mathrm{in}} > 0$, or $s_i = 1$ if $k_i^{\mathrm{in}} = 0$. In other words, the trophic level of the first node is $s_1 = 1$ by convention, while every other node is assigned the mean trophic level of its in-neighbors, plus one. Thus, for any DAG, the trophic levels of all nodes can be easily obtained by solving the linear system of Eq. 1. Johnson et al. [11] characterize each edge of a network by its trophic distance $x_{ij} = s_j - s_i$. They then consider the distribution $p(x)$ of trophic distances over the network. The homogeneity of $p(x)$ is called trophic coherence: the more similar the trophic distances of all the edges, the more coherent the network. As a measure of coherence, one can simply use the standard deviation of $p(x)$, which is referred to as the incoherence parameter $q$: the smaller $q$, the more coherent (ordered) the network.

2.4 Multiscale feature representation
The multiscale representation ability of convolutional neural networks is achieved and improved by using convolutional layers with different kernel sizes (e.g., InceptionNets [22, 23, 24]), by utilizing features with different resolutions [2, 3], and by combining features with different sizes of receptive field [6, 7, 9]. We argue that the degree of disorder of convolutional neural network structures improves their multiscale representation ability.
3 ResNetX
Consider a single image $x_0$ that is passed through a convolutional network. The network comprises $L$ layers, each of which implements a nonlinear transformation $H_\ell(\cdot)$, where $\ell$ indexes the layer. $H_\ell(\cdot)$ can be a composite function of operations such as Batch Normalization (BN) [10], rectified linear units (ReLU), pooling [14], or convolution (Conv). We denote the output of the $\ell$-th layer as $x_\ell$. ResNet [6, 7] adds a skip connection that bypasses the nonlinear transformation with an identity function:
$$x_\ell = H_\ell(x_{\ell-1}) + x_{\ell-1} \qquad (2)$$
An advantage of ResNet is that the gradient can flow directly through the identity function (dashed lines in Fig. 1a) from later layers to the earlier layers.
3.1 ResNetX design
We provide a natural yet efficient extension of ResNet. Our intuition is simple: we fold the backbone chain (all the nonlinear transformations) of ResNet, so that the direct chains (all the identity functions) can trace back a larger number of previous feature maps with different receptive field sizes. Thus, we introduce a new parameter $X$ to represent the fold depth. The deeper the fold, the larger the number of previous feature maps with different receptive field sizes the model can trace back. When $X = 1$, our model reduces to the original ResNet. To distinguish our model from ResNet, we name it ResNetX, where the character "X" at the end symbolizes the new parameter, i.e., the fold depth. Fig. 1 illustrates the architectures of the original ResNet (1a) and our ResNetX model for two larger fold depths (1b and 1c), respectively.
Compared with ResNet, our architecture traces back a larger number of previous feature maps with different receptive field sizes, thus promoting the fusion of features at multiple scales and improving the multiscale representation ability. Moreover, our architecture increases the number of "direct" chains from one in ResNet (dashed line in Fig. 1a) to two (dashed lines in Fig. 1b), three (dashed lines in Fig. 1c), and more, which decreases the average length of paths through the entire network, increases the number of effective paths, and thus promotes the direct propagation of information along the "direct" chains. We argue that these two features account for the effectiveness of our model.
Our model can be formally expressed by the following steps and equations. First, the output $x_\ell$ of the current layer $\ell$ equals the sum of the nonlinear transformation of the output of the previous layer and the output of layer $\ell - d$:
$$x_\ell = H_\ell(x_{\ell-1}) + x_{\ell-d} \qquad (3)$$
The layer difference $d$ is determined by the current layer index $\ell$ and the fold depth $X$. When the current layer index is less than the fold depth, we set $d = 1$ as in ResNet, in order to accumulate enough outputs that can be traced back by later layers, i.e.,
$$d = 1, \quad \text{if } \ell < X \qquad (4)$$
Otherwise, we first divide the current layer index $\ell$ by $2X$ to get the remainder
$$r_1 = \ell \bmod 2X \qquad (5)$$
After that, if the remainder lies between $0$ and $X - 1$, the layer difference equals $2 r_1 + 1$, i.e.,
$$d = 2 r_1 + 1, \quad \text{if } 0 \le r_1 \le X - 1 \qquad (6)$$
Otherwise, we further compute the second remainder
$$r_2 = r_1 \bmod X \qquad (7)$$
and calculate the layer difference as $d = 2 r_2 + 1$.
In summary, the layer difference can be computed by the following equation:
$$d = \begin{cases} 1, & \text{if } \ell < X \\ 2 r_1 + 1, & \text{if } \ell \ge X \text{ and } 0 \le r_1 \le X - 1 \\ 2 r_2 + 1, & \text{otherwise} \end{cases} \qquad (8)$$
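The folding rule can be sketched in a few lines of Python. Note that the exact moduli used here (first 2X, then X) are our reading of the two-remainder construction described above, not taken from an official implementation; the sketch is only required to be self-consistent and to recover ResNet when the fold depth is 1.

```python
def layer_difference(layer, fold_depth):
    """Layer difference d in x_l = H_l(x_(l-1)) + x_(l-d) for ResNetX.

    The moduli used here (first 2X, then X) are our reading of the folding
    construction; fold_depth = 1 must recover ResNet (d = 1 everywhere).
    """
    if layer < fold_depth:             # warm-up layers behave like ResNet
        return 1
    r1 = layer % (2 * fold_depth)      # first remainder
    if r1 <= fold_depth - 1:           # first half of the fold
        return 2 * r1 + 1
    r2 = r1 % fold_depth               # second remainder
    return 2 * r2 + 1

# Fold depth X = 1 recovers the original ResNet: d is always 1.
assert all(layer_difference(l, 1) == 1 for l in range(1, 50))
```

Under this reading, a fold depth of 2 makes the layer difference alternate 1, 3, 1, 3, ..., producing two interleaved "direct" chains, and the skip target l - d never falls below the first layer.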
3.2 Comparison between ResNetX and ResNet
In order to compare the architectures of ResNetX and ResNet, we first need to map both of them to directed acyclic graphs. The mapping from neural network architectures to general graphs is flexible. We here intentionally chose a simple mapping: nodes in the graph represent nonlinear transformations of data, while edges represent data flows that send data from one node to another. Such a mapping separates the impact of network structure on performance from the impact of node operations, since all the weights of the neural network are contained in the nodes of the graph.
Under the above mapping rule, the architecture of ResNet is mapped to a complete directed acyclic graph (Fig. 4). For a complete directed acyclic graph, the distribution of the lengths of all paths from the first node to the last node follows a binomial distribution, which conforms to the results in [27]. A complete directed acyclic graph also has a high value of the incoherence parameter $q$, which indicates a high degree of disorder. The architectures of ResNetX are mapped to different directed acyclic graphs according to the value of the fold depth $X$; Fig. 4 gives two examples for different fold depths.
We compare the distribution of path lengths of ResNet and ResNetX in Fig. 5. As shown in Fig. 5, the proportions of shorter paths in ResNetX are all larger than those in ResNet, and they increase with the fold depth $X$. We also computed the incoherence parameter $q$ of ResNetX for $X \in \{3, 4, 5\}$ and compared it with that of ResNet. As shown in Tab. 1, all the values of the incoherence parameter of ResNetX are larger than that of ResNet, and they increase with the fold depth $X$.
The comparison of path lengths and incoherence parameter between ResNetX and ResNet shows that ResNetX has a larger proportion of shorter paths and a higher degree of disorder than ResNet; we argue that these two features bring the better performance of ResNetX.
Model  Incoherence parameter (q)

ResNet  0.8523
ResNetX (X=3)  0.8904
ResNetX (X=4)  0.8950
ResNetX (X=5)  0.9124
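The incoherence parameter reported in Tab. 1 can be computed directly from the adjacency matrix of a mapped DAG. A minimal sketch, assuming the nodes are topologically ordered (edges only go from lower to higher index), so that the trophic levels of Eq. 1 can be filled in by a single forward pass instead of a general linear solve:

```python
from statistics import pstdev

def trophic_incoherence(adj):
    """Trophic levels and incoherence parameter q of a DAG.

    adj[i][j] = 1 encodes a directed edge i -> j; nodes are assumed to be
    topologically ordered. Basal nodes (no incoming edges) get trophic
    level 1; every other node gets the mean trophic level of its
    in-neighbors plus one (Eq. 1). q is the standard deviation of the
    trophic distances s_j - s_i over all edges.
    """
    n = len(adj)
    levels = []
    for j in range(n):
        preds = [i for i in range(j) if adj[i][j]]
        if preds:
            levels.append(1 + sum(levels[i] for i in preds) / len(preds))
        else:
            levels.append(1.0)
    distances = [levels[j] - levels[i]
                 for i in range(n) for j in range(n) if adj[i][j]]
    return levels, pstdev(distances)

# A plain chain 0 -> 1 -> 2 -> 3 is perfectly coherent (q = 0), while the
# complete DAG that models ResNet has a large spread of trophic distances.
chain = [[1 if j == i + 1 else 0 for j in range(4)] for i in range(4)]
complete = [[1 if j > i else 0 for j in range(4)] for i in range(4)]
```

For the chain the trophic levels are 1, 2, 3, 4 and q = 0; for the 4-node complete DAG the levels are 1, 2, 2.5 and 17/6, and q is positive, reflecting the disorder discussed above.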
4 Experiments
Limited by experimental conditions, we do not have the computing resources to train on large-scale datasets, and we have to plan carefully to conserve our very limited computing resources. Thus, we only consider parameters that are critical for the comparison between ResNetX and ResNet, and keep all other parameters constant. Since ResNetX only changes the way residual connections are wired between earlier and later layers of ResNet, and changes nothing inside the layers, it should mainly change the influence of network depth on performance and be orthogonal to other aspects of the architecture. Therefore, we keep all other parameters constant and only change the network depth to evaluate its effect on performance.
We evaluate ResNetX on the classification task on the CIFAR10 and CIFAR100 datasets and compare it with ResNet. We choose the basic building block of ResNet and the depthwise separable convolutional block of the Xception network [4] as the building block of ResNetX, respectively.
4.1 Implementation details
Our focus is on the behavior of extremely deep networks, so we use simple architectures following the style of ResNet-110 [6]. The network inputs are 32x32 images. The first stem layer is a convolution-BN block. It is followed by 4 stages, each containing the same number of blocks; the number of channels in all stages is set to 32. The first stage does not downsample; the other three stages downsample by max-pooling operations. The network ends with a global average pooling, a 10-way or 100-way fully-connected layer, and softmax. The blocks can be the bottleneck block of ResNet or the Xception block, and the blocks are connected according to the architecture of ResNetX or ResNet.
We implement ResNetX using the PyTorch framework and evaluate it using the fastai library. We use the Learner class and its fit_one_cycle function from the fastai library to train both ResNetX and ResNet. The Adam optimization method and the "1cycle" learning rate policy [21] are used. The momentum range of Adam is set to [0.95, 0.85], weight decay is set to 0.01, mini-batch size is set to 128, and the learning rate is set to 0.02 in all settings. To save limited computing resources, we run 3 times, each time for 5 epochs, for each combination of parameters. The median accuracy of the 3 runs is reported to reduce the impact of random variation. Obviously, we cannot produce state-of-the-art results; our goal is to evaluate the relative performance improvement of ResNetX over ResNet.
4.2 DataSets
The two CIFAR datasets consist of colored natural images of 32x32 pixels. CIFAR10 consists of images drawn from 10 classes and CIFAR100 from 100 classes. The training and test sets contain 50,000 and 10,000 images, respectively. We follow the simple data augmentation in [9] for training: 4 pixels are padded on each side, and a 32x32 crop is randomly sampled from the padded image or its horizontal flip. For preprocessing, we normalize the data using the channel means and standard deviations.
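For reference, the pad/crop/flip augmentation and the channel normalization described above can be sketched in plain numpy; in practice they are implemented with fastai's built-in transforms, and the function names here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """CIFAR-style augmentation: pad 4 px on each side, sample a random
    32x32 crop, and flip horizontally with probability 0.5.
    img is an HxWxC array with H = W = 32."""
    padded = np.pad(img, ((4, 4), (4, 4), (0, 0)), mode="constant")
    top, left = rng.integers(0, 9, size=2)      # offsets 0..8 inclusive
    crop = padded[top:top + 32, left:left + 32]
    if rng.random() < 0.5:
        crop = crop[:, ::-1]                    # horizontal flip
    return crop

def normalize(batch):
    """Normalize an NxHxWxC batch with per-channel means and stds
    computed from the data passed in."""
    mean = batch.mean(axis=(0, 1, 2), keepdims=True)
    std = batch.std(axis=(0, 1, 2), keepdims=True)
    return (batch - mean) / std
```

After normalization each channel has zero mean and unit standard deviation over the given array, matching the preprocessing used for both training and test data.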
4.3 Results
For CIFAR10, the number of blocks per stage is set to {24, 32, 40, 64}, and the fold depth of ResNetX is set to {3, 4, 5}. Tab. 2 and Tab. 3 show the results when the basic block is implemented by the Xception block and the bottleneck block, respectively. The results show that ResNetX increases the classification accuracy by up to 5.42% when the basic block is the Xception block, and by up to 2.33% when the basic block is the bottleneck block.
For CIFAR100, the number of blocks per stage is set to {24, 32, 40}, and the fold depth of ResNetX is set to {3, 4, 5}. Tab. 4 and Tab. 5 show the results when the basic block is implemented by the Xception block and the bottleneck block, respectively. The results show that ResNetX increases the classification accuracy by up to 6.59% when the basic block is the Xception block, and by up to 2.67% when the basic block is the bottleneck block.
Model  Blocks per stage  Accuracy (%) 

ResNet  24  79.69 
32  79.93  
40  79.80  
64  79.72  
ResNetX (X=3)  24  82.98 
32  83.85  
40  83.94  
64  84.73  
ResNetX (X=4)  24  83.86 
32  84.10  
40  84.12  
64  84.97  
ResNetX (X=5)  24  83.56 
32  84.23  
40  84.39  
64  85.35 
Model  Blocks per stage  Accuracy (%) 

ResNet  24  85.62 
32  85.53  
40  85.74  
64  85.39  
ResNetX (X=3)  24  86.03 
32  86.83  
40  87.40  
64  88.07  
ResNetX (X=4)  24  85.92 
32  86.57  
40  86.90  
64  87.64  
ResNetX (X=5)  24  85.86 
32  86.28  
40  86.45  
64  87.16 
Model  Blocks per stage  Accuracy (%) 

ResNet  24  46.72 
32  47.15  
40  47.10  
ResNetX (X=3)  24  51.76 
32  52.09  
40  52.91  
ResNetX (X=4)  24  52.50 
32  53.13  
40  53.74  
ResNetX (X=5)  24  52.14 
32  52.90  
40  53.52 
Model  Blocks per stage  Accuracy (%) 

ResNet  24  54.87 
32  55.27  
40  55.85  
ResNetX (X=3)  24  56.91 
32  57.83  
40  58.30  
ResNetX (X=4)  24  56.18 
32  58.13  
40  58.52  
ResNetX (X=5)  24  55.17 
32  57.55  
40  58.10 
5 Conclusion and future work
We present a simple yet efficient architecture, namely ResNetX. ResNetX has two structural features when mapped to a directed acyclic graph. The first is a higher degree of disorder compared with ResNet, which lets ResNetX explore a larger number of feature maps with different receptive field sizes. The second is a larger proportion of shorter paths compared with ResNet, which improves the direct flow of information through the entire network. ResNetX exposes a new dimension, namely the "fold depth", in addition to the existing dimensions of depth, width, and cardinality. Our ResNetX architecture is a natural extension of ResNet and can be integrated with existing state-of-the-art methods with little effort. Image classification results on the CIFAR10 and CIFAR100 benchmarks suggest that our new network architecture performs better than ResNet.
Although preliminary results suggest the effectiveness of our model, we recognize that our experiments are limited and do not reach state-of-the-art results. We will explore more parameter values and more datasets as conditions permit. The source code of ResNetX can be accessed at https://github.com/keepsimpler/zero, and we encourage others to conduct more experiments to evaluate its performance.
References
 [1] (198712) Neural Networks Primer, Part I. AI Expert 2 (12), pp. 46–52. External Links: ISSN 08883785, Link Cited by: §1.
 [2] (201907) BigLittle Net: An Efficient MultiScale Feature Representation for Visual and Speech Recognition. arXiv:1807.03848 [cs]. Note: arXiv: 1807.03848Comment: git repo: https://github.com/IBM/BigLittleNet External Links: Link Cited by: §2.4.
 [3] (201908) Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution. arXiv:1904.05049 [cs]. Note: arXiv: 1904.05049Comment: Accepted to ICCV 2019 External Links: Link Cited by: §2.4.
 [4] (201610) Xception: Deep Learning with Depthwise Separable Convolutions. arXiv:1610.02357 [cs]. Note: arXiv: 1610.02357 External Links: Link Cited by: §2.1, §4.
 [5] (201606) Intervality and coherence in complex networks. Chaos: An Interdisciplinary Journal of Nonlinear Science 26 (6), pp. 065308 (en). Note: arXiv: 1603.03767 External Links: ISSN 10541500, 10897682, Link, Document Cited by: §2.3.
 [6] (201512) Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs]. Note: arXiv: 1512.03385Comment: Tech report External Links: Link Cited by: §1, §2.1, §2.4, §3, §4.1.
 [7] (201610) Identity Mappings in Deep Residual Networks. In Computer Vision – ECCV 2016, Lecture Notes in Computer Science, pp. 630–645 (en). External Links: ISBN 9783319464923 9783319464930, Link, Document Cited by: §1, §2.1, §2.4, §3.
 [8] (201704) MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861 [cs]. Note: arXiv: 1704.04861 External Links: Link Cited by: §2.1.
 [9] (201608) Densely Connected Convolutional Networks. arXiv:1608.06993 [cs]. Note: arXiv: 1608.06993Comment: CVPR 2017 External Links: Link Cited by: §1, §2.1, §2.4, §4.2.
 [10] (201502) Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167 [cs]. Note: arXiv: 1502.03167 External Links: Link Cited by: §3.
 [11] (201404) Trophic coherence determines foodweb stability. arXiv:1404.7728 [condmat, qbio]. Note: arXiv: 1404.7728Comment: Manuscript plus Supporting Information. To appear in PNAS External Links: Link, Document Cited by: §2.3, §2.3.
 [12] (200910) Random graph models for directed acyclic networks. Physical Review E 80 (4). Note: arXiv: 0907.4346Comment: 14 pages, 5 figures External Links: ISSN 15393755, 15502376, Link, Document Cited by: §2.3.
 [13] (201606) From neurons to epidemics: How trophic coherence affects spreading processes. Chaos: An Interdisciplinary Journal of Nonlinear Science 26 (6), pp. 065310. Note: arXiv: 1603.00670 External Links: ISSN 10541500, 10897682, Link, Document Cited by: §2.3.
 [14] (199811) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. External Links: ISSN 00189219, 15582256, Document Cited by: §3.
 [15] (201902) Random Search and Reproducibility for Neural Architecture Search. arXiv:1902.07638 [cs, stat] (en). Note: arXiv: 1902.07638 External Links: Link Cited by: §1.
 [16] (2010) Networks: An Introduction. Oxford University Press, Inc., New York, NY, USA. External Links: ISBN 0199206651 9780199206650 Cited by: §1.
 [17] (201802) Efficient Neural Architecture Search via Parameter Sharing. arXiv:1802.03268 [cs, stat]. Note: arXiv: 1802.03268 External Links: Link Cited by: §1.
 [18] (201801) MobileNetV2: Inverted Residuals and Linear Bottlenecks. arXiv:1801.04381 [cs]. Note: arXiv: 1801.04381 External Links: Link Cited by: §2.1.
 [19] (201902) Evaluating the Search Phase of Neural Architecture Search. arXiv:1902.08142 [cs, stat] (en). Note: arXiv: 1902.08142Comment: We find that random policy in NAS works amazingly well and propose an evaluation framework to have a fair comparison. 8 pages External Links: Link Cited by: §1.
 [20] (201409) Very Deep Convolutional Networks for LargeScale Image Recognition. arXiv:1409.1556 [cs]. Note: arXiv: 1409.1556 External Links: Link Cited by: §2.1.
 [21] (201708) SuperConvergence: Very Fast Training of Neural Networks Using Large Learning Rates. arXiv:1708.07120 [cs, stat]. Note: arXiv: 1708.07120Comment: This paper was significantly revised to show superconvergence as a general fast training methodologyhttps://github.com/lnsmith54/superconvergence External Links: Link Cited by: §4.1.
 [22] (201602) Inceptionv4, InceptionResNet and the Impact of Residual Connections on Learning. (en). External Links: Link Cited by: §2.1, §2.4.
 [23] (201409) Going Deeper with Convolutions. arXiv:1409.4842 [cs]. Note: arXiv: 1409.4842 External Links: Link Cited by: §2.4.
 [24] (201512) Rethinking the Inception Architecture for Computer Vision. arXiv:1512.00567 [cs]. Note: arXiv: 1512.00567 External Links: Link Cited by: §2.4.
 [25] (201905) EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv:1905.11946 [cs, stat] (en). Note: arXiv: 1905.11946Comment: Published in ICML 2019 External Links: Link Cited by: §2.1.
 [26] (201809) Deep learning systems as complex networks. (en). External Links: Link Cited by: §1.
 [27] (2016) Residual Networks Behave Like Ensembles of Relatively Shallow Networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS’16, USA, pp. 550–558. External Links: ISBN 9781510838819, Link Cited by: §2.2, §3.2.
 [28] (201704) Aggregated Residual Transformations for Deep Neural Networks. arXiv:1611.05431 [cs] (en). Note: arXiv: 1611.05431Comment: Accepted to CVPR 2017. Code and models: https://github.com/facebookresearch/ResNeXt External Links: Link Cited by: §2.1.
 [29] (201904) Exploring Randomly Wired Neural Networks for Image Recognition. (en). External Links: Link Cited by: §1.
 [30] (201605) Wide Residual Networks. (en). External Links: Link Cited by: §2.1.
 [31] (201611) Neural Architecture Search with Reinforcement Learning. arXiv:1611.01578 [cs]. Note: arXiv: 1611.01578 External Links: Link Cited by: §1.
 [32] (201707) Learning Transferable Architectures for Scalable Image Recognition. arXiv:1707.07012 [cs, stat]. Note: arXiv: 1707.07012 External Links: Link Cited by: §1.