Deep neural networks (DNNs) have dominated the field of computer vision owing to their superior performance on a wide range of tasks. Network architectures tend to become deeper and more complex [27, 26, 12, 35, 16] to yield higher accuracy. However, the great computational expense of deeper networks contradicts the demands of many resource-constrained applications, which prefer lightweight networks [15, 25, 38, 21] to meet limited computation or storage requirements.
An elegant solution is the dynamic inference mechanism [31, 29, 34, 17, 5, 7, 8, 28], which adaptively reconfigures the inference path according to the input sample to achieve a better accuracy-efficiency trade-off. Prevalent dynamic inference techniques are mostly layer-wise methods [31, 29, 34, 7, 28], as shown in Fig. 0(a). These methods determine the execution status of a whole layer at runtime based on a specified mechanism.
However, all these existing dynamic inference methods only alter the depth of the network, which has obvious drawbacks. First, dropping a whole layer/block is impractically coarse, since some channels of a skipped layer may still be useful. Second, redundant information between different channels may still exist in the remaining layers. A recent study visualizes the hidden features of CNN models and shows how different channels and layers contribute to performance: different channels and layers place different emphasis on feature extraction.
In this work, we attempt to improve the conventional dynamic inference scheme in terms of both network width and depth, and to find an effective forward mechanism for different inputs at runtime from the new perspective of block design. We propose the Dynamic Multi-path Neural Network (DMNN), a novel dynamic inference method that provides a variety of inference path selections. Fig. 0(b) gives an overview of our approach. Different from conventional methods, ideally each channel has its own gate predicting whether it should be executed. The primary technical challenge of DMNN is how to design an efficient and effective controller.
Challenge of efficiency. Since DMNN aims at channel-wise dynamic execution, ideally it would control the execution of each channel of the network at runtime. However, this would lead to a significant increase in computational complexity. Moreover, as controllers are attached to each layer/block of the network, they must be lightweight and generate only a small amount of additional computation.
Challenge of effectiveness. The gate control mechanism is similar to SENet, which adaptively recalibrates channel-wise feature responses by explicitly modeling interdependencies between channels. However, SENet uses a soft weighted sum, while DMNN adopts a hard max for faster inference while maintaining or even boosting accuracy. To obtain a more reasonable inference path, it is better to take both previous state information and the object category into consideration. Besides, a resource-constrained loss is required to keep the computational complexity controllable.
To tackle these challenges, and considering that different channels have different representation characteristics, we split each original block of the network into several sub-blocks, so the proposed method provides more optional inference paths. A gate controller decides whether to execute or skip each sub-block for the current input, generating only minor additional computation during inference. Each block has its own controller governing the status of every sub-block. We carefully design the gate controller to take both previous state information and the object category into consideration. Moreover, we introduce a resource-constrained loss that integrates a FLOPs constraint into the optimization process to keep the computational complexity controllable. The proposed DMNN is easy to implement and can be incorporated into most modern network architectures.
The contributions are summarized as follows:
We propose a novel dynamic inference method called Dynamic Multi-path Neural Network, which can provide more path selection choices in terms of network width and depth during inference.
We carefully design a gate controller module, which takes into account both previous state and object category information. A resource-constrained loss is also introduced to control the computational complexity of the target network.
Experimental results demonstrate the superiority of our method in both efficiency and overall classification accuracy. Specifically, DMNN-101 significantly outperforms ResNet-101 with an encouraging 45.1% FLOPs reduction, and DMNN-50 achieves results comparable to ResNet-101 with 42.1% fewer parameters.
2 Related Work
Adaptive Computation. Adaptive computation aims to reduce overall inference time by changing the network topology for different samples while maintaining or even boosting accuracy. The idea appeared in early cascade detectors [6, 30], which rely on extra prediction modules or handcrafted control strategies. Learning-based layer-wise dynamic inference schemes have been widely investigated in computer vision. Early-prediction models such as BranchyNet and Adaptive Computation Time adopt branches or halt units to decide whether the model can stop early. Other works use a gating mechanism to determine the execution of a specific block. Wang et al. propose SkipNet, which uses a gating network to selectively skip convolutional blocks based on the activations of the previous layer; a hybrid algorithm combining supervised learning and reinforcement learning addresses the non-differentiability of the skipping decisions. Wu et al. propose BlockDrop, which also uses a reinforcement learning setting whose reward encourages utilizing a minimal number of blocks while preserving recognition accuracy. ConvNet-AIG utilizes the Gumbel-Max trick to optimize the gate module. However, block-wise methods can only alter the depth of the network, which can be too coarse, as some channels of a discarded block may still be useful.
On the other hand, a channel-wise method can adjust the number of active channels of a specific model. To the best of our knowledge, Slimmable Neural Networks is the only work in this direction: it adjusts network width on the fly according to on-device benchmarks and resource constraints. Strictly speaking, however, this is not a dynamic process, since the active channels are chosen before inference; moreover, the pre-defined width multipliers limit the flexibility of the mechanism. We attempt to combine the merits of both kinds of methods and propose a novel dynamic inference method that provides more path selection choices in terms of both network width and depth.
Model Compression. The great computational expense of deep networks contradicts the demands of many resource-constrained applications such as mobile platforms, so reducing storage and inference time also plays an important role in deploying top-performing deep neural networks. Many techniques have been proposed to attack this problem, such as pruning [13, 22, 33], distillation [14, 23], quantization [11, 32, 19], low-rank factorization, compression with structured matrices, and network binarization. However, these techniques are usually applied after training the initial networks as a post-processing step, while DMNN is trained end-to-end without specially designed training rules.
In this section, we introduce the proposed Dynamic Multi-path Neural Network (DMNN) in detail, including the subdivision of blocks, the architecture of the controller, and the optimization approach.
3.1 Block Subdivision
Ideally, the execution of each individual channel would be controlled at runtime; however, this would lead to a significant increase in computational complexity. In this work, we instead divide each original block of the network into several sub-blocks, and each sub-block has its own switch deciding whether it executes, resulting in a dynamic inference path for different samples. We interpret optimizing the network structure as executing or skipping each sub-block during the inference stage.
A key issue is how to split one block into sub-blocks. The guiding principle is that the parameter count of the new block must be consistent with, or approximate to, that of the original block for a fair comparison. Fig. 2 shows the subdivision of the blocks of MobileNetV2 and ResNet.
For a MobileNetV2 block, we divide the original block into $N$ sub-blocks and scale the expansion ratio of each sub-block to $t/N$, where $t$ is the original expansion ratio. The sum of the sub-blocks' computation and parameters then matches the original block, since the block consists only of point-wise and depth-wise convolutions, both linear in the expanded width; more detail is shown in Fig. 1(a).
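To see why the totals match, the following pure-Python sketch counts the parameters of an inverted-residual block (1×1 expand, 3×3 depth-wise, 1×1 project; biases and BN omitted) and of its split into $N$ sub-blocks with expansion ratio $t/N$. The channel sizes used here are illustrative, not taken from the paper.

```python
# Parameter count of an inverted-residual block:
#   1x1 expand (c_in -> c_in*t), 3x3 depthwise (c_in*t), 1x1 project (c_in*t -> c_out).
# Splitting into n sub-blocks with expansion ratio t/n keeps the total unchanged,
# because every term is linear in the expanded width c_in*t.
def inverted_residual_params(c_in, c_out, t):
    c_mid = c_in * t
    return c_in * c_mid + 9 * c_mid + c_mid * c_out

def split_params(c_in, c_out, t, n):
    # n sub-blocks, each with expansion ratio t/n
    return n * inverted_residual_params(c_in, c_out, t / n)

print(inverted_residual_params(32, 32, 6))  # original block
print(split_params(32, 32, 6, 4))           # four sub-blocks: same total
```

Because each parameter term scales linearly with the expanded width, the split is exact for any $N$ that divides the expanded channel count evenly.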
For ResNet, the split is less straightforward. As shown in Fig. 1(b), suppose the input tensor has $c_0$ channels and the output channels of the three convolutions ($1\times1$, $3\times3$, $1\times1$) are $c_1$, $c_2$, and $c_3$. Ignoring biases, the parameter count of the original bottleneck block is
$$P_{orig} = c_0 c_1 + 9 c_1 c_2 + c_2 c_3. \quad (1)$$
The original block is then split into $N$ sub-blocks, where the $i$-th sub-block has internal output channels $c_1^i$ and $c_2^i$ and final output channels $c_3$. The parameter count becomes
$$P_{sub} = \sum_{i=1}^{N}\left(c_0 c_1^i + 9 c_1^i c_2^i + c_2^i c_3\right). \quad (2)$$
If we simply set the internal channels of each sub-block to $1/N$ of the original, i.e., $c_1^i = c_1/N$ and $c_2^i = c_2/N$, Eqn. 2 can be rewritten as
$$P_{sub} = c_0 c_1 + \frac{9 c_1 c_2}{N} + c_2 c_3,$$
which is not equal to Eqn. 1. Thus, to make the subsequent studies fair, we make minor modifications to ResNet and design the corresponding DMNN version so that Eqn. 1 equals Eqn. 2. To save space, the detailed architecture of DMNN-50 is given in the supplementary materials.
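The mismatch between the original and naively split bottleneck can be checked numerically. The sketch below uses illustrative channel counts (256, 64, 64, 256, a standard ResNet-50 bottleneck configuration) and a naive split into $n$ sub-blocks; only the 3×3 term shrinks, so the totals differ.

```python
# Bottleneck parameters (1x1, 3x3, 1x1 convs; biases/BN omitted):
#   P = c0*c1 + 9*c1*c2 + c2*c3
def bottleneck_params(c0, c1, c2, c3):
    return c0 * c1 + 9 * c1 * c2 + c2 * c3

# Naive split into n sub-blocks with c1/n and c2/n internal channels:
# the 3x3 term shrinks by a factor of n, so the totals no longer match.
def naive_split_params(c0, c1, c2, c3, n):
    return n * bottleneck_params(c0, c1 // n, c2 // n, c3)

p_full = bottleneck_params(256, 64, 64, 256)
p_split = naive_split_params(256, 64, 64, 256, 2)
print(p_full, p_split)  # the naive split loses 9*c1*c2*(1 - 1/n) parameters
```

This is why the paper modifies the ResNet channel configuration so the split version matches the original parameter budget.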
3.2 The Architecture of Controller
The controller is elaborately designed to predict the on/off status of each sub-block at minimal cost; it is the inference-path optimizer of DMNN. An overview of the dynamic path selection framework is shown in Fig. 3. Given an input image, its forward path is determined by the gate controllers; Fig. 2(a) shows the gate mechanism of DMNN. Suppose we split the $b$-th block into $N$ sub-blocks; the output of the $b$-th block combines an identity connection with the outputs of the $N$ sub-blocks. Formally,
$$x^b = x^{b-1} + \sum_{i=1}^{N} s_i^b \, F_i^b(x^{b-1}),$$
where $x^b$ is the output of the $b$-th block, $s_i^b \in \{0, 1\}$ refers to the off/on status predicted by the controller, and $F_i^b$ refers to the $i$-th sub-block of the $b$-th block.
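A toy sketch of this gating rule, with scalar "features" and hypothetical sub-block functions standing in for the sub-blocks:

```python
# x_b = x_prev + sum_i s_i * F_i(x_prev): the identity path always runs,
# and each sub-block contributes only when its gate s_i is 1.
def block_output(x_prev, sub_blocks, gates):
    assert len(sub_blocks) == len(gates)
    return x_prev + sum(s * f(x_prev) for f, s in zip(sub_blocks, gates))

# Hypothetical sub-block functions for illustration only.
sub_blocks = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x]
print(block_output(3.0, sub_blocks, [1, 0, 1]))  # 3 + 6 + 0 + (-3) = 6.0
```

Setting all gates to 1 recovers a full multi-branch block; setting all gates to 0 reduces the block to a pure identity connection.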
Spatial and previous state information embedding. On one hand, the control modules make decisions based on global spatial information. We compress the high-dimensional features to one value per channel by global average pooling, then use a fully connected layer followed by an activation to map the pooled features to a low-dimensional space. Specifically, let $x^{b-1} \in \mathbb{R}^{C \times H \times W}$ denote the input features of the $b$-th block; we calculate the $c$-th channel statistic by
$$z_c = \frac{1}{H \times W} \sum_{h=1}^{H} \sum_{w=1}^{W} x^{b-1}_c(h, w).$$
The final embedding feature is
$$e^b = \delta(W_1 z),$$
where $W_1 \in \mathbb{R}^{d \times C}$, $\delta$ is the ReLU function, and $d$ is the dimension of the hidden layer.
On the other hand, there are connections between the current controller and the previous controllers, so integrating previous state information is also crucial. We first employ a fully connected layer followed by ReLU to map the previous state's hidden features $h^{b-1}$ into the same subspace as $e^b$. We then add the mapped hidden feature to $e^b$ to obtain the current state. Formally,
$$h^b = \delta(W_2 h^{b-1}) + e^b,$$
where $W_2 \in \mathbb{R}^{d \times d}$ and $\delta$ represents the ReLU function. Bias terms are omitted for simplicity. The status predictions of each sub-block at the $b$-th block are then made from $h^b$ using a softmax trick, which we introduce in Section 3.3.
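The controller's embedding-and-fusion computation can be sketched in pure Python (the weights below are illustrative, and biases are omitted as in the text):

```python
# Global average pooling: [C][H][W] feature maps -> one statistic per channel.
def gap(features):
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in features]

def linear(w, x):  # w: [out][in], bias-free
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def relu(x):
    return [max(0.0, v) for v in x]

# h_b = ReLU(W2 @ h_prev) + ReLU(W1 @ gap(x)): spatial embedding fused with
# the previous controller's hidden state.
def controller_hidden(features, h_prev, w_embed, w_state):
    e = relu(linear(w_embed, gap(features)))
    return [a + b for a, b in zip(relu(linear(w_state, h_prev)), e)]

feats = [[[1.0, 3.0], [5.0, 7.0]], [[0.0, 2.0], [4.0, 6.0]]]  # C=2, 2x2 maps
w_embed = [[1.0, 0.0], [0.0, 1.0]]                             # illustrative weights
w_state = [[0.5, 0.0], [0.0, 0.5]]
print(controller_hidden(feats, [2.0, -4.0], w_embed, w_state))  # [5.0, 3.0]
```

In the actual network these linear maps would be `nn.Linear` layers with `d = 32` hidden units, but the arithmetic is the same.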
Softmax Trick with Gumbel Noise. The decision to execute or skip a sub-block is inherently discrete and therefore non-differentiable. In this work, we use the softmax trick with Gumbel noise to solve this problem, which has proved successful in prior work. Formally, let $N$ be the number of sub-blocks and $\beta^b = W_3 h^b + b_3$, where $W_3 \in \mathbb{R}^{2N \times d}$ and $b_3$ is the bias term. $\beta^b$ is then reshaped to $N \times 2$ for the final predictions. The activation can be written as
$$s_i^b = \arg\max_{j \in \{0, 1\}} \left(\beta^b_{i,j} + g_{i,j}\right),$$
where $s_i^b$ refers to the status of the $i$-th sub-block of the $b$-th block, and $g_{i,j}$ is random noise following the Gumbel distribution, which increases the stability of the training process.
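A minimal sketch of the Gumbel-max decision for one gate, assuming two logits (skip/execute) per sub-block. Sampling argmax(logits + Gumbel noise) is equivalent to sampling from the softmax distribution over the logits, which is what makes the trick useful for training stochastic gates:

```python
import math
import random

# Standard Gumbel noise: g = -log(-log(U)), U ~ Uniform(0, 1).
def gumbel_noise():
    u = random.random()
    return -math.log(-math.log(u + 1e-20) + 1e-20)

# Hard decision for one sub-block gate: argmax over the two noisy logits.
def gate_decision(logit_skip, logit_exec):
    noisy = [logit_skip + gumbel_noise(), logit_exec + gumbel_noise()]
    return int(noisy[1] > noisy[0])  # 1 = execute, 0 = skip

random.seed(0)
rate = sum(gate_decision(0.0, 2.0) for _ in range(1000)) / 1000
print(rate)  # close to softmax probability e^2 / (1 + e^2) ~ 0.88
```

At training time the backward pass uses a soft relaxation of this argmax (e.g. a straight-through estimator), while inference uses the hard decision shown here.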
Supervised learning of controller. Deep CNNs compute feature hierarchies layer by layer, producing feature maps of different depths and resolutions; this can be viewed as a coarse-to-fine feature extraction process. DMNN offers a diversity of inference paths, and we hope that different classes select different paths. However, if path selection is trained only through the classification loss at the last layer, it is difficult for the controllers to learn category information. To solve this problem, we introduce a category loss at each controller so that all controllers become category-aware. Since having each controller predict every original class would be computationally expensive, we cluster samples into fewer categories than the original classes. For ImageNet, we cluster the 1000 classes into 58 big categories using the hierarchical structure of ImageNet. The CIFAR-100 dataset groups its 100 classes into 20 superclasses, which we use directly as the big categories. A cross-entropy loss is then employed to supervise all controllers, as shown in Fig. 2(b). Formally, the category loss of the $b$-th controller can be written as
$$L_{cat}^b = -\sum_{k=1}^{K} y_k \log p_k,$$
where $p_k$ represents the predicted probability of the $k$-th category, $y_k = 1$ if $k$ is the ground-truth category and 0 otherwise, and $K$ indicates the number of categories. It is worth noting that the loss weights of the blocks' controllers are not all equal, since features at different layers carry different semantic information: deep layers have stronger semantics than shallow layers. In DMNN-50, four stages composed of 3, 4, 6, and 3 stacked blocks yield 16 controllers. The loss weight of the first stage is set to 0.0001, and it increases by a factor of 10 at each subsequent stage; DMNN-101 follows the same principle. The total supervised controller loss can be represented as
$$L_{ctrl} = \sum_{b=1}^{B} \lambda_b L_{cat}^b,$$
where $\lambda_b$ denotes the loss weight of the $b$-th controller and $B$ denotes the number of blocks. The category supervision is removed after training, so it generates no extra computational burden during testing.
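The per-controller category loss with stage-wise weighting can be sketched as follows. The stage weights 1e-4 through 1e-1 follow the DMNN-50 setting described above; the category probabilities are illustrative:

```python
import math

# Cross-entropy for one controller's coarse-category prediction.
def cross_entropy(probs, gt_index):
    return -math.log(probs[gt_index])

# Weighted sum over controllers: per-stage weights grow by 10x per stage,
# starting from 1e-4 (the DMNN-50 setting).
def controller_loss(per_block_probs, gt_index, stage_of_block):
    stage_weight = {0: 1e-4, 1: 1e-3, 2: 1e-2, 3: 1e-1}
    return sum(stage_weight[stage_of_block[b]] * cross_entropy(p, gt_index)
               for b, p in enumerate(per_block_probs))

# Two controllers in different stages, K=3 coarse categories (illustrative).
probs = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
print(controller_loss(probs, 0, {0: 0, 1: 1}))
```

Weighting deeper controllers more strongly reflects the observation that deep-layer features carry more category-specific semantics.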
The controller must remain lightweight during the optimization of the network structure. The dimension of the hidden layer is set to 32 in all experiments. This setting generates only a small computational cost that is negligible compared to the whole network: in DMNN-50, the 16 controllers together generate only about 0.02% of the FLOPs of the original ResNet-50.
Resource-constrained Loss. The resource constraint comes from two aspects: the block execution rate and the total FLOPs. The execution rate of each block in a mini-batch is used to constrain the average block activation rate toward a target rate $t$. Let $r_b$ denote the execution rate of the $b$-th block within a mini-batch; we define it as
$$r_b = \frac{1}{mN} \sum_{i=1}^{N} m_i^b,$$
where $m$ is the mini-batch size and $m_i^b$ is the number of times the $i$-th sub-block is executed within the mini-batch. The total execution-rate loss can be written as
$$L_{rate} = \sum_{b=1}^{B} \left(r_b - t\right)^2.$$
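A sketch of the execution-rate loss over a mini-batch, with hypothetical 0/1 gate decisions:

```python
# r_b: fraction of executed (sub-block, sample) pairs in block b over a mini-batch.
def execution_rate(gates_per_sample):  # [m samples][N gates], entries 0 or 1
    m, n = len(gates_per_sample), len(gates_per_sample[0])
    return sum(map(sum, gates_per_sample)) / (m * n)

# L_rate = sum_b (r_b - t)^2: squared deviation from the target rate t.
def rate_loss(blocks, target):
    return sum((execution_rate(g) - target) ** 2 for g in blocks)

# 2 blocks, batch of 2 samples, N=2 sub-blocks each (illustrative decisions).
block_gates = [[[1, 0], [1, 1]], [[0, 0], [1, 0]]]
print(rate_loss(block_gates, 0.5))  # (0.75-0.5)^2 + (0.25-0.5)^2 = 0.125
```

The squared penalty pushes each block's average activation toward the target rather than forcing a fixed pattern per sample, so individual inputs are still free to use more or fewer sub-blocks.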
The other constraint is the total FLOPs. To meet the desired FLOPs, we explicitly introduce a target FLOPs rate $t_f$ into the loss function. In each mini-batch, we compute the actual FLOPs via
$$F_{act} = \frac{1}{m} \sum_{samples} \sum_{b=1}^{B} \sum_{i=1}^{N} s_i^b F_i^b,$$
where $F_i^b$ indicates the FLOPs of the $i$-th sub-block at the $b$-th block of the network. The FLOPs loss can be formulated as
$$L_{flops} = \left(\frac{F_{act}}{F_{full}} - t_f\right)^2,$$
where $F_{full}$ and $F_{act}$ represent the full FLOPs and the actually executed FLOPs of the network respectively, and $t_f$ denotes the target FLOPs rate. We set $t = t_f$ in all experiments since the two rates are strongly positively correlated and take similar values. The resource-constrained loss is then defined as
$$L_{res} = L_{rate} + L_{flops}.$$
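The FLOPs term can be sketched the same way, with hypothetical per-sub-block FLOPs values:

```python
# Average executed FLOPs over the batch, from 0/1 gates and per-sub-block costs.
def actual_flops(gates, sub_block_flops):  # gates: [m][B][N], flops: [B][N]
    m = len(gates)
    return sum(s * f
               for sample in gates
               for blk, flops_blk in zip(sample, sub_block_flops)
               for s, f in zip(blk, flops_blk)) / m

# L_flops = (F_act / F_full - t_f)^2
def flops_loss(gates, sub_block_flops, target_rate):
    full = sum(map(sum, sub_block_flops))
    return (actual_flops(gates, sub_block_flops) / full - target_rate) ** 2

flops = [[10.0, 10.0], [20.0, 20.0]]           # B=2 blocks, N=2 sub-blocks each
gates = [[[1, 0], [1, 1]], [[1, 1], [0, 0]]]   # batch of 2 samples (illustrative)
print(flops_loss(gates, flops, 0.6))
```

Because both losses are simple squared deviations, their gradients flow through the (relaxed) gate activations and steer the controllers toward the desired budget.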
The total training loss is
$$L = L_{cls} + L_{ctrl} + L_{res},$$
where $L_{cls}$ is the classification loss. The joint loss is optimized by mini-batch stochastic gradient descent.
| Model | Top-1 Err. (%) | Params (M) | FLOPs (G) | FLOPs Ratio (%) |
|---|---|---|---|---|
| ResNet-50 (PyTorch Official) | – | – | – | – |
| ResNet-50 + Pruning | 23.91 | 20.45 | 2.66 | 70.0 |
| ResNeXt-50 | 23.0 | 25.4 | 4.16 | 105.1 |
| ResNeXt-50 | 22.6 | 25.3 | 4.20 | 106.1 |
| ConvNet-AIG-50 | 23.82 | 26.56 | 3.06 | 77.3 |
| ResNet-101 (PyTorch Official) | 23.63 | 44.55 | 7.67 | 100.0 |
| ResNeXt-101 | 21.7 | 44.46 | 7.9 | 103.0 |

[Tab. 1; the DMNN-50 and DMNN-101 rows were not recovered.]
Our implementations of ResNet-50, ResNet-101, DMNN-50, and DMNN-101 follow the PyTorch community's conv-layer configuration, which is slightly different from the original paper.
In this section, we evaluate the performance of the proposed DMNN on benchmark datasets including ImageNet and CIFAR-100.
4.1 Training Setup
ImageNet. The ImageNet dataset consists of 1.2 million training images and 50K validation images covering 1000 classes. We train networks on the training set and report top-1 errors on the validation set. Following standard practice, we perform data augmentation with random horizontal flipping and random-size cropping to 224×224 pixels. We use the Nesterov SGD optimizer with momentum 0.9 and a mini-batch size of 256. A cosine learning-rate schedule is employed for better convergence, with the initial learning rate set to 0.1. We use different weight decays for models of different scales: 0.0001 for ResNet and 0.00004 for MobileNet. All models are trained for 120 epochs from scratch.
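The cosine schedule described above can be written as a one-line function (a standard formulation; the exact variant used in the paper may differ slightly, e.g. in warmup or a nonzero floor):

```python
import math

# Cosine learning-rate schedule: decays from base_lr at epoch 0 to ~0 at the
# final epoch, matching the 120-epoch, lr=0.1 ImageNet setup above.
def cosine_lr(epoch, total_epochs=120, base_lr=0.1):
    return 0.5 * base_lr * (1 + math.cos(math.pi * epoch / total_epochs))

print(cosine_lr(0), cosine_lr(60), cosine_lr(120))  # 0.1 0.05 0.0
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max` set to the number of epochs.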
CIFAR-100. The CIFAR-100 dataset consists of 60,000 color images in 100 classes, split into a training set and a testing set at a ratio of 5:1. Considering the small size of CIFAR images (32×32), we follow the same setting as prior work to construct our DMNNs for a fair comparison. We augment each input image by padding 4 pixels of value 0 on each side, followed by random cropping to 32×32 and random horizontal flipping. We train the network using SGD with momentum 0.9 and weight decay 0.0001. The mini-batch size is set to 256, and the initial learning rate is set to 0.1. We train the networks for 200 epochs and divide the learning rate by 10 twice, at the 100th and 150th epochs respectively.
4.2 Performance Analysis
We compare our method with ResNet, ResNeXt, MobileNetV2, a pruning method, and other dynamic inference methods [36, 29]. We denote by $N$ the number of sub-blocks in each block and by $t_f$ the target FLOPs rate.
| Model | Top-1 Err. (%) | Params (M) | FLOPs (G) | FLOPs Ratio (%) |
|---|---|---|---|---|
| MobileNetV2 | 28.0 | 3.47 | – | – |
| MobileNetV2 (ours) | 28.09 | 3.50† | 0.30 | 100.0 |
| S-MobileNetV2-0.75 | 31.1 | 2.7 | 0.23 | 76.7 |
| MobileNetV2 (1.4) | 25.3 | 6.06 | – | – |
| MobileNetV2 (1.4) (ours) | 25.30 | 6.09† | 0.57 | 100.0 |
| DMNN-MobileNetV2 (1.4) | 26.03 | 6.29 | 0.42 | 73.7 |
| DMNN-MobileNetV2 (1.4) | 25.53 | 6.29 | 0.47 | 82.5 |
| DMNN-MobileNetV2 (1.4) | 25.26 | 6.29 | 0.52 | 91.2 |
†Our implementation of MobileNetV2 is based on PyTorch, and its parameter counts are obtained with PyTorch Summary.
Performance on heavy networks. Tab. 1 shows that DMNN achieves remarkable results compared to other heavy models on ImageNet. First, we compare DMNN with ResNet. Under one setting of $N$ and $t_f$, DMNN-50 achieves performance similar to ResNet-50 while saving more than 42.4% of the FLOPs; with a larger FLOPs budget, DMNN-50 further reduces the top-1 error by 1.19% while still saving 19.9% of the FLOPs. DMNN-101 outperforms ResNet-101 while saving 45.1% of the FLOPs at the same time. These comparisons demonstrate that DMNN can greatly reduce FLOPs and improve accuracy relative to models with similar parameter counts. On the other hand, DMNN-50 performs even better than the original ResNet-101 (and closely matches ResNet-101 in our implementation) with a 42.0% reduction in parameters, indicating that DMNN greatly saves parameters and is feasible for practical model deployment.
Then, we compare DMNN with the stronger ResNeXt baselines. Our method is superior to ResNeXt-50 in both FLOPs and accuracy; with a suitable setting, DMNN-50 reduces the top-1 error by 0.28% while still saving 24.5% of the FLOPs. A similar result holds when comparing to ResNeXt-101. DMNN shows clear superiority over ResNeXt, mainly because of its finer control over the different convolution groups.
Our method also outperforms ConvNet-AIG in both accuracy and computational complexity, demonstrating that the multi-path design is finer-grained and superior to roughly skipping whole blocks. In particular, DMNN-50 achieves performance comparable to ConvNet-AIG while reducing the FLOPs by approximately 33.3%. Fig. 4 shows the accuracy-efficiency trade-off of DMNN compared to other dynamic inference methods, including the slimmable neural network S-ResNet. Meanwhile, as an end-to-end method, DMNN shows clear advantages over post-processing methods such as pruning.
We also conduct experiments on the CIFAR-100 dataset, as shown in Tab. 3. DMNN-50 can even outperform ResNet-50 by 1.4% on CIFAR-100 with only 78.7% of the FLOPs.
Performance on lightweight networks. We apply DMNN to the lightweight network MobileNetV2, as shown in Tab. 2. Although similar conclusions hold, the improvement is naturally smaller than for heavy models because of the compact structure. Notably, our DMNN saves 10.0% of the FLOPs while achieving a better top-1 error than MobileNetV2 (1.4). The proposed method also outperforms other dynamic inference methods.
In summary, our method performs strongly in both accuracy and computational complexity for heavy and lightweight networks alike, demonstrating its broad applicability to different architectures and its robustness across datasets.
[Tab. 3: CIFAR-100 results — columns: Model, FLOPs (G), Top-1 Err. (%); rows not recovered.]
[Tab. 4: ablation on the controller — columns: Method, PREV (previous-state features), CAT (category supervision), Top-1 Err. (%); rows not recovered.]
4.3 Ablation Study
Effectiveness of the gate controller. To show the effectiveness of the controllers, we conduct four groups of experiments on ImageNet with different configurations. Tab. 4 shows the comparison of the resulting models. Employing previous-state features and supervised learning separately each brings an additional improvement; aggregating the two boosts performance by 0.68%, demonstrating the benefits of previous-state information embedding and supervised learning of the controllers. It is worth noting that using the previous controller's features introduces only a fully connected layer with 32 hidden neurons, whose additional cost is negligible. The supervised learning of the controllers may generate minor additional computation during training, but it is removed at the testing stage.
The impact of $N$ and $t_f$. We adopt different values of $N$ and $t_f$ to explore their impact on performance. As shown in Tab. 1, we vary $N$ while keeping $t_f$ fixed on DMNN-50; the model with the largest $N$ obtains the lowest test error, indicating that a bigger $N$ provides more path selection choices and consequently better performance. We then fix $N$ and change $t_f$ to 0.4, 0.5, and 0.6. A larger $t_f$ leads to more computational cost, verifying the effectiveness of our resource-constrained mechanism, and the model with a larger FLOPs rate attains higher accuracy since more computation units are involved. DMNN can thus achieve a better accuracy-efficiency trade-off under different computational budgets. We have not conducted experiments with larger $N$ due to resource limitations, but we will explore the behavior of DMNN with larger $N$ in future work.
Visualization of dynamic inference paths. Inference paths vary across images, leading to different computational costs. Fig. 5 shows the distribution of FLOPs on the ImageNet validation set for our DMNN-50 model. Images with mid-range FLOPs form the largest proportion, and different images indeed occupy different computing resources under the computational constraint. We further visualize the execution rates of each sub-block within the categories of animals, artifacts, natural objects, and geological formations, as shown in Fig. 7. Some sub-blocks, especially in the first two blocks of the network, are executed all the time, while the execution rates of the other sub-blocks vary across categories. One explanation is that different categories share the same shallow-layer features, which are important for classification; as the layers go deeper, the semantic information of the features becomes stronger and more category-dependent.
Visualization of “easy” and “hard” samples. We find that even samples of the same category can have different inference paths. A reasonable explanation is that hard samples need more computation than easy ones. Fig. 6 shows examples of easy and hard samples with different actual FLOPs. Although for some classes, such as malamute and lifeboat, the “hard” samples are visibly more difficult than the “easy” ones, for most classes the quality gap is not particularly noticeable. We infer that this is because the notion of easy and hard samples mainly depends on the representation properties of the neural network rather than on human intuition.
In this paper, we present a novel dynamic inference method called Dynamic Multi-path Neural Network (DMNN). The proposed method splits each original block into multiple sub-blocks, making the network more flexible in handling different samples adaptively. We also carefully design the structure of the gate controller to obtain reasonable inference paths, and introduce a resource-constrained loss to make full use of the representation capacity of the sub-blocks. Experimental results demonstrate the superiority of our method.
-  S. Chandel. sksq96/pytorch-summary, Sep 2018.
-  Y. Cheng, F. X. Yu, R. S. Feris, S. Kumar, A. Choudhary, and S.-F. Chang. An exploration of parameter redundancy in deep networks with circulant projections. In Proceedings of the IEEE International Conference on Computer Vision, pages 2857–2865, 2015.
-  M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
-  J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
-  X. Dong, J. Huang, Y. Yang, and S. Yan. More is less: A more complicated network with less inference complexity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5840–5848, 2017.
-  P. F. Felzenszwalb, R. B. Girshick, and D. McAllester. Cascade object detection with deformable part models. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2241–2248. IEEE, 2010.
-  M. Figurnov, M. D. Collins, Y. Zhu, L. Zhang, J. Huang, D. Vetrov, and R. Salakhutdinov. Spatially adaptive computation time for residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1039–1048, 2017.
-  M. Figurnov, A. Ibraimova, D. P. Vetrov, and P. Kohli. Perforatedcnns: Acceleration through elimination of redundant convolutions. In Advances in Neural Information Processing Systems, pages 947–955, 2016.
-  X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 315–323, 2011.
-  E. J. Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures, volume 33. US Government Printing Office, 1954.
-  S. Han, H. Mao, and W. J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  Y. He, X. Zhang, and J. Sun. Channel pruning for accelerating very deep neural networks. In International Conference on Computer Vision (ICCV), volume 2, 2017.
-  G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
-  A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
-  J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
-  G. Huang, D. Chen, T. Li, F. Wu, L. van der Maaten, and K. Q. Weinberger. Multi-scale dense networks for resource efficient image classification. arXiv preprint arXiv:1703.09844, 2017.
-  Y. Ioannou, D. Robertson, J. Shotton, R. Cipolla, and A. Criminisi. Training cnns with low-rank filters for efficient image classification. arXiv preprint arXiv:1511.06744, 2015.
-  B. Jacob, S. Kligys, B. Chen, M. Zhu, M. Tang, A. Howard, H. Adam, and D. Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2704–2713, 2018.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  N. Ma, X. Zhang, H.-T. Zheng, and J. Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. arXiv preprint arXiv:1807.11164, 2018.
-  P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440, 2016.
-  A. Polino, R. Pascanu, and D. Alistarh. Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668, 2018.
-  PyTorch. torchvision.models documentation.
-  M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
-  S. Teerapittayanon, B. McDanel, and H. Kung. Branchynet: Fast inference via early exiting from deep neural networks. In 2016 23rd International Conference on Pattern Recognition (ICPR), pages 2464–2469. IEEE, 2016.
-  A. Veit and S. Belongie. Convolutional networks with adaptive inference graphs. In European Conference on Computer Vision, pages 3–18. Springer, 2018.
-  P. Viola and M. J. Jones. Robust real-time face detection. International Journal of Computer Vision, 57(2):137–154, 2004.
-  X. Wang, F. Yu, Z.-Y. Dou, and J. E. Gonzalez. Skipnet: Learning dynamic routing in convolutional networks. arXiv preprint arXiv:1711.09485, 2017.
-  Y. Wei, X. Pan, H. Qin, W. Ouyang, and J. Yan. Quantization mimic: Towards very tiny cnn for object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 267–283, 2018.
-  W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pages 2074–2082, 2016.
-  Z. Wu, T. Nagarajan, A. Kumar, S. Rennie, L. S. Davis, K. Grauman, and R. Feris. Blockdrop: Dynamic inference paths in residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8817–8826, 2018.
-  S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 5987–5995. IEEE, 2017.
-  J. Yu, L. Yang, N. Xu, J. Yang, and T. Huang. Slimmable neural networks. arXiv preprint arXiv:1812.08928, 2018.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer, 2014.
-  X. Zhang, X. Zhou, M. Lin, and J. Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6848–6856, 2018.