1 Introduction
Recently, large and deep neural networks have demonstrated extraordinary performance on various computer vision and machine learning tasks [1, 2]. We notice that the inference structures, execution procedures, and computational complexity of existing deep neural networks, once trained, are fixed and remain the same for all test images, no matter how much they have been optimized for speed. In this work, our goal is to develop a new progressive framework for deep neural networks such that a single model can be evaluated at different performance levels with different computational complexity requirements. This single-model-variable-complexity property is very important in practice. We recognize that different images have different complexity levels of visual representation and different difficulty levels for visual recognition. For simple images with low visual recognition complexity, we can easily classify the image or recognize the object with simple networks at very low complexity. For example, it is very easy to detect a person standing in front of a clean background, or to classify whether the person is male or female. For harder images, we need to extract sophisticated visual features using more layers of network representation to gain sufficient visual discriminative power so that the object can be successfully distinguished from those in other classes.
To validate this observation, we conducted the following experiment. We collected 16 different pre-trained deep neural networks, ranging from the very low-complexity MobileNet [3] to the very high-complexity DenseNet-201 [2]. We used these networks to classify the images in the ILSVRC-2012 [1] 50k validation set. Fig. 1 (left) shows the classification results. Each row corresponds to a specific network. The horizontal axis represents the index of the test image. A blue line indicates that the image is successfully classified by the network, i.e., the correct label is ranked first. A magenta line indicates that the correct result is within the top 2 to 5 categories. A yellow line indicates that the correct result is beyond the top 5 results. As summarized in Fig. 1, we can see that 38% of images are always correctly classified by all networks, no matter how simple the network is. These are the so-called simple images with low visual recognition complexity. This suggests that, if we can successfully identify this set of simple images, we can save a lot of computational resources by choosing simple networks to analyze them. More excitingly, if we are able to model or predict the visual recognition complexity of the input image, and if we are able to establish a progressive network, we can then adapt the network complexity at run time according to the visual recognition complexity of the input image. This allows us to save computational resources significantly.
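The routing argument above can be made concrete in a few lines of numpy. The sketch below uses a randomly generated networks-by-images correctness matrix as a stand-in for real classifier outputs; the matrix shape and the 0.75 hit rate are hypothetical, not measurements from the paper.

```python
import numpy as np

# Hypothetical correctness matrix: rows = networks, cols = validation images.
# correct[i, j] is True when network i ranks the true label of image j first.
rng = np.random.default_rng(0)
correct = rng.random((16, 1000)) < 0.75  # stand-in for real classifier outputs

# Images that every network gets right are the "simple" ones that could be
# routed to the cheapest network without losing accuracy.
always_correct = correct.all(axis=0)
fraction_simple = always_correct.mean()
print(f"{fraction_simple:.1%} of images are correctly classified by all networks")
```

With real predictions in place of the random matrix, `fraction_simple` would reproduce the 38% figure reported for the 16 networks.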
Let us look at one more example. Fig. 2 shows two images, a simple Ice Cream image and a hard Siamese Cat image with occlusion. We choose 5 networks with different computational complexity. The most complex network is DenseNet-201 [2], labeled with 100%. The complexity levels of the other four networks are shown as approximate relative percentages in Fig. 2. For example, the inference cost of the first network is about 5% of DenseNet-201. From this experiment, we can see that, for the simple Ice Cream image, the confidence score for Ice Cream is much higher than those of the other object categories, even with very simple networks. However, for the hard Siamese Cat image, the score distribution is more uniform for low-complexity networks. As the network becomes more complex and more powerful, the classifier becomes more and more confident about the result, since the score for the correct label becomes much higher than those of the other categories.
These two experiments strongly suggest that it is highly desirable to establish a progressive deep neural network structure which is able to adapt its network inference structure and computational complexity to images with different visual recognition complexity. This progressive structure should be able to scale up its discriminative power and achieve higher recognition capability by activating and executing more analysis units of the deep neural network, accumulating more visual evidence for more challenging vision analysis tasks or image sets.
The major contributions of this work can be summarized as follows. (1) We have successfully developed a multi-stage progressive structure, called ProgNet, for deep neural networks, with each stage being separately evaluated for output decision and all stages being activated in a sequential manner with progressively increased complexity and visual recognition power. (2) We present different structures for progressive network design and develop a Confidence Analysis and Decision Policy (CADP) network to learn the classification confidence function for the progressive network and make run-time complexity-accuracy decisions for each input image. (3) Our extensive experimental results demonstrate that the proposed progressive framework for deep neural networks is able to outperform existing state-of-the-art networks or models, from MobileNet to DenseNet, using one single model whose complexity can be adaptively controlled. This progressive framework will provide an important and useful tool for practical deep neural network design and resource allocation in real-time applications.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 provides a conceptual overview of the proposed progressive framework. Detailed design and methods for progressive networks are presented in Section 4. Section 5 presents our confidence analysis and decision policy network. Experimental results are presented in Section 6. Section 7 concludes the paper.
2 Related Work
This work is closely related to complexity control/optimization and confidence analysis of deep neural networks.
(A) Complexity Optimization of Deep Neural Networks. Deep neural networks often involve high computational complexity. A number of methods have been developed to accelerate the inference speed of deep neural networks, or to reduce their computational resource requirements so that they can operate on lower-end devices, such as CPUs, embedded processors, and mobile devices. Network parameter pruning and quantization are two effective approaches. Gong et al. [4] and Wu et al. [5] applied k-means scalar quantization to pre-trained parameters. Significant speed-up can be achieved by 8-bit integer and 16-bit floating point quantization, as shown in [6] and [7]. Parameter pruning approaches [8, 9, 10] can be used to reduce network complexity. Low-rank factorization and decomposition [11], transferred learning and compact convolutional filter learning [12, 13], and knowledge distillation [14, 15, 16] methods have been developed to reduce the network complexity. NoScope [17] tackled the problem of very expensive surveillance video object detection by using a shallow and fast CNN as an early estimator and dispatcher. Only complex scenes with significant inter-frame changes require a full run of the deep object detection network, so higher analysis speed can be achieved. All of these methods aim to optimize the computational complexity and inference speed of deep neural networks, often at the cost of degraded visual recognition performance. Their inference structures, execution procedures, and computational complexity, once trained, are fixed and remain the same for all test images. They are not able to adapt their network inference structure and complexity to different resource supplies and input images.
(B) Confidence Analysis for Deep Neural Networks. Researchers have recognized that the decision scores of existing deep neural networks are poorly calibrated [18]. For example, higher scores often do not correspond to better or closer samples. [19] argues that probability scores generated by softmax should not be considered as confidence scores or distance measures directly, since they are based on the norm of pre-softmax inputs. To address this issue, various methods using scalar, vector, matrix, and binning approaches have been developed to calibrate the confidence scores produced by the softmax function. Gaussian density modeling is proposed in [19] as a post-prediction calibration using prior information from the training data. Gal et al. [20] implemented a randomized dropout network [21] to estimate the uncertainty level of network predictions. The open set deep network approach in [22] attempts to measure the uncertainty contributed by unknown categories. It should be noted that these methods are based on statistical modeling, being optimized on the entire validation set, and are therefore not suitable for confidence analysis of each individual input image.

3 Method Overview
Fig. 3 provides an overview of the proposed progressive framework for deep neural networks. In this work, we divide the network into K stages with deep neural network units U_1, U_2, ..., U_K. Each unit U_k consists of a set of network layers, including convolution, pooling, ReLU, etc. The output of unit U_1 is a feature vector f_1. At decision stage 1, we use the feature f_1 to generate the decision output, i.e., the classification result for the input image, using an evaluation network E_1. The evaluation network consists of a set of network layers, including convolution, pooling, fully connected, and softmax layers. At stage 2, the feature f_1 generated by unit U_1 and the feature f_2 generated by unit U_2 are fused together using a fusion network to produce the fused feature h_2, which will be used by the evaluation network E_2 to produce the decision output. The fusion network consists of feature concatenation followed by convolution layers for normalization. The fused feature h_2 produced at stage 2 is forwarded to stage 3. At stage 3, the network unit U_3 produces the feature f_3, which is fused with h_2 from the previous stages to produce the fused feature h_3. We can see that the proposed progressive deep neural network is able to accumulate and fuse more and more visual features, scaling up its visual recognition power by activating more network units while certainly consuming more computational resources. The network inference can be terminated at any stage.

Let R_k be the output result produced by the evaluation network E_k. At later stages, the progressive network accumulates more visual features or evidence for classification, so its classification accuracy or visual recognition power will be higher, and the uncertainty of its decision will be lower. We therefore need a carefully designed module, called the Confidence Analysis and Decision Policy (CADP) network, to analyze the output results R_1, R_2, ..., R_k from the current stage and its previous stages. It decides whether the current decision is reliable enough for early termination of the inference process, or whether we need to proceed to the next stage to gather more visual evidence. In this work, the CADP network is realized by a recurrent neural network (RNN) learned from the training data.
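The stage-by-stage loop described above can be sketched in a few lines. The toy numpy stand-ins below (linear "units", fusion by concatenation, linear evaluation "heads", and a max-softmax confidence test in place of the learned CADP network) are hypothetical illustrations, not the paper's actual layers.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def progressive_inference(x, units, heads, threshold=0.9):
    """Run the stages one by one: each unit extracts a feature from the
    input, features are fused by concatenation, and the stage's evaluation
    head scores the fused feature.  Inference terminates early once the
    top-1 probability clears the confidence threshold."""
    fused = None
    for k, (unit, head) in enumerate(zip(units, heads)):
        feat = unit(x)
        fused = feat if fused is None else np.concatenate([fused, feat])
        probs = softmax(head(fused))
        if probs.max() >= threshold or k == len(units) - 1:
            return probs.argmax(), k  # predicted class, terminating stage

rng = np.random.default_rng(0)
x = rng.normal(size=8)
# Toy stand-ins: 3 linear units (8 -> 4 features each) and heads sized
# for the growing fused feature, over 10 hypothetical classes.
units = [(lambda W: (lambda v: W @ v))(rng.normal(size=(4, 8))) for _ in range(3)]
heads = [(lambda W: (lambda f: W @ f))(rng.normal(size=(10, 4 * (k + 1)))) for k in range(3)]
pred, stage = progressive_inference(x, units, heads, threshold=0.9)
```

Lowering `threshold` makes the loop exit earlier on average, which is exactly the complexity-accuracy knob the CADP network learns to operate per image.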
The task of the CADP network is twofold: (1) it needs to generate the decision for classification or other visual recognition tasks at the current stage using the evaluation results from the current and previous stages; (2) it needs to learn an optimal decision policy for early termination such that the overall computational complexity is minimized while maintaining the state-of-the-art classification accuracy achieved by existing non-progressive networks. Let X_n, 1 ≤ n ≤ N, be the input images. We define

R_k(X_n) = E_k[h_k(X_n)], 1 ≤ k ≤ K, (1)

to be the decision output of the evaluation network E_k at stage k, where h_k(X_n) is the fused feature at stage k. We denote the decision policy in the CADP network by Π. Suppose the CADP network decides that image X_n be terminated at stage k = Π(X_n). Let C_i be the computational complexity of stage i. Then the computational complexity for evaluating the input image X_n will be

C(X_n) = Σ_{i=1}^{Π(X_n)} C_i. (2)

The overall accuracy for all test images will be given by

A[Π] = (1/N) Σ_{n=1}^{N} 1[R_{Π(X_n)}(X_n) = y_n], (3)

where y_n is the ground-truth label of X_n. Therefore, the optimal policy to be learned by the CADP network aims to minimize the overall complexity while maintaining the target accuracy A_T:

Π* = argmin_Π Σ_{n=1}^{N} C(X_n), subject to A[Π] ≥ A_T. (4)
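The bookkeeping in Eqs. (2) and (3) is simple to compute once the policy has assigned each image an exit stage. A toy sanity check (all stage costs, exit stages, and correctness flags hypothetical):

```python
import numpy as np

stage_cost = np.array([1.0, 1.5, 2.5, 4.0])  # hypothetical per-stage costs C_i
term_stage = np.array([0, 2, 1, 3, 0])       # 0-based exit stage per image
correct    = np.array([1, 1, 0, 1, 1])       # 1 if the exit-stage prediction is right

# Eq. (2): cost of an image is the sum of all stage costs up to its exit stage.
cum_cost = np.cumsum(stage_cost)             # [1.0, 2.5, 5.0, 9.0]
per_image_cost = cum_cost[term_stage]

avg_cost = per_image_cost.mean()             # the quantity minimized in Eq. (4); 3.7 here
accuracy = correct.mean()                    # Eq. (3); 0.8 here
```

The policy's job is to push `avg_cost` down while keeping `accuracy` above the target.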
In the following sections, we will present our progressive deep neural network design and explain our method to learn the CADP network.
4 Progressive Deep Neural Network Design
The concept of progressive inference is different from traditional network inference. The network must be able to produce a sequence of complete prediction results. Early stages of the network should have small computational complexity. Besides, the features and results from previous stages should be reused and accumulated. As discussed in [23], the overall computation required by the standard convolutional layers of a CNN is given by:

Cost = Σ_{l=1}^{L} K_l^2 · F_l^2 · C_{l-1} · C_l, (5)

where K_l, F_l, C_{l-1}, and C_l are the kernel size, input feature map size, and input and output channel counts of layer l, respectively. To change the values of K_l and F_l, we can choose different building blocks, such as residual [24] and dense [2] blocks. In complexity-progressive network design, we focus on the remaining two sets of parameters: channels (C_{l-1}, C_l) and layers (L), which dominate the overall complexity since their values are typically very large.
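The per-layer cost in Eq. (5) is a straight product of the four quantities, which also shows why shrinking both channel counts pays off quadratically. A small sketch (the layer dimensions are illustrative, not from the paper):

```python
def conv_macc(kernel, fmap, c_in, c_out):
    """Multiply-accumulate count of one standard conv layer, per Eq. (5):
    kernel^2 spatial taps, evaluated at fmap^2 positions, across all
    input/output channel pairs."""
    return kernel ** 2 * fmap ** 2 * c_in * c_out

# A 3x3 conv on a 56x56 feature map, 64 -> 128 channels:
macc = conv_macc(3, 56, 64, 128)

# Halving both channel counts (a thin stage in the parallel structure)
# cuts the cost by a factor of 4:
assert conv_macc(3, 56, 32, 64) * 4 == macc
```

Summing `conv_macc` over all layers gives the total cost in Eq. (5).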
It can be seen that these two sets of parameters represent two different dimensions for network partition: horizontal and vertical. This leads to two different structures, parallel and serial, for progressive deep neural network design, which are explained in the following.
(A) Parallel structures. In the parallel structure, we partition the network into multiple stages by reducing the input and output channel sizes C_{l-1} and C_l. Let α be the downsampling ratio of C_{l-1} and C_l. The complexity of the convolution layers can then be reduced by α^2 according to Eq. (5). As shown in Fig. 4 (left), at each stage, we use a thin network with small input and output channel sizes. The depth of the network can be as large as in the original non-progressive network. We use existing building blocks developed in the literature. Available choices are residual [24], residual bottleneck [25], dense [2], inception [26], inception-residual [27], and NasNet [28] blocks. Similar to [29], a Reduction block contains stride-2 convolution or downsampling layers used to reduce the spatial size by a factor of 2, while a Normal block keeps the spatial dimensions intact. In this parallel structure, the input image is analyzed by different network units with different channel sizes. The features generated by different units are fused and accumulated using a concatenation operator.

(B) Serial structures. One limitation of the parallel structure is that the width of each unit or branch cannot be reduced to arbitrarily small numbers. In our experiments, 4 and 8 are the minimum effective widths of each unit on the CIFAR [30] and ImageNet [1] datasets, respectively, in order to maintain sufficient representation capacity. The serial structure instead partitions the network along the dimension of layers L. As shown in Fig. 4 (right), we extract features from different layers of the network, apply global pooling to them, and use a fully connected layer to generate the output decision. This feature is also concatenated with features extracted from subsequent layers to be used for the decision at the next stage. We can see that in this serial structure, the complexity of different stages increases linearly with the layer depth.

Designing and successfully training the progressive network structure is a challenging task. Specifically, we need to make sure that: (1) the overall accuracy at stage k achieved by the evaluation network increases with k; otherwise, the additional computational resources have been wasted. (2) When we apply the full complexity, i.e., evaluate each input using the whole network, we need to make sure that it is more complexity-accuracy effective than existing state-of-the-art networks.
Following the work in [31, 32] for multi-class classification, we use the cross entropy loss as our joint loss function:

L = Σ_{k=1}^{K} w_k · L_CE(R_k, y), (6)

where w_k and R_k are the weight and the output of stage k, respectively, and y is the ground-truth label. If not otherwise required by the targeted application, we treat all stages with uniform weights (w_k = 1/K), i.e., the outputs from all stages are equally important.
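The joint loss in Eq. (6) is just a weighted sum of per-stage cross-entropy terms. A minimal numpy sketch (the stage count, class count, and logits are hypothetical):

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable -log softmax(logits)[label].
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def joint_loss(stage_logits, label, weights=None):
    """Eq. (6): weighted sum of per-stage cross-entropy losses.
    Uniform weights w_k = 1/K treat every stage's output as equally
    important, as in the paper's default setting."""
    K = len(stage_logits)
    if weights is None:
        weights = [1.0 / K] * K
    return sum(w * cross_entropy(l, label) for w, l in zip(weights, stage_logits))

rng = np.random.default_rng(0)
stage_logits = [rng.normal(size=10) for _ in range(4)]  # 4 stages, 10 classes
loss = joint_loss(stage_logits, label=3)
```

Non-uniform `weights` would let an application emphasize early-stage accuracy, at the cost of the later stages.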
5 Confidence Analysis and Decision Policy Learning
In the previous section, we introduced the ProgNet, which can perform network prediction at a sequence of stages. At each stage, ProgNet needs to determine whether the current evaluation output is reliable or confident enough, and whether it is necessary to proceed to the next stage to accumulate more visual evidence. During our experiments, we found that the decisions at different stages are interdependent. Specifically, the current stage needs to examine the evaluation results of all previous stages for effective decisions. To address this issue, we propose a recurrent neural network (RNN) to learn the confidence analysis and decision policy (CADP). As illustrated in Fig. 5, the RNN CADP network uses the pre-softmax outputs of the evaluation networks at all previous stages to learn the confidence estimation and decision policy at the current stage.

More specifically, for each input (usually a mini-batch of images), the RNN controller takes as input the pre-softmax class outputs from the current CNN classifier and produces two outputs: new categorization results and a post-sigmoid confidence estimate. The post-sigmoid confidence score is compared with a user-defined threshold t in the range [0, 1] to determine whether the output is emitted directly from the RNN classification results; otherwise, another stage of CNN-RNN-decision is required.
Optimizing the RNN controller is the most challenging problem in this work. For each image, with a user-defined threshold t, the objective of the CADP network is to solve the following optimization problem: minimizing the overall error rate subject to a computational complexity constraint:

min Σ_n ∫_0^1 [1 − e_k(X_n, t)] dt, (7)
k = min { i | H[s_i(X_n) − t] = 1 }, (8)
subject to Σ_{i=1}^{k} c_i ≤ B, (9)

where K is the number of stages and c_i is the normalized computational cost of the i-th DNN unit. B is a constant number once the ProgNet is composed. H is the Heaviside (unit) step function, which is 1 for positive inputs and 0 for negative inputs. e_k is the correctness score (1 if correctly classified, 0 otherwise) of the k-th stage, the stage at which inference terminates. Finding k is equivalent to finding the first index i at which the confidence score s_i(X_n) clears the threshold t, i.e., H[s_i(X_n) − t] = 1, as shown in Eq. (8). Without loss of generality, we approximate the confidence integral using a summation over discrete samples of t within the range [0, 1]. The above optimization problem becomes:

min Σ_n Σ_{t∈T} [1 − e_k(X_n, t)]. (10)

Combining the standard cross entropy loss and Eq. (10), we obtain the following optimization objective:

min_{W,b} Σ_n { L_CE(R(X_n), y_n) + λ Σ_{t∈T} [1 − e_k(X_n, t)] }, (11)

where W and b are the weights and bias of the RNN controller, respectively, y_n is the ground-truth label of the n-th image, and λ is a hyper-parameter controlling the relative weights of the classification and confidence losses. While it is possible to directly optimize Eq. (11) with the constraints of Eq. (9) using the method in [33], we found that it is more efficient and robust to convert the problem into:

s* = argmin_s Σ_n Σ_{t∈T} [1 − e_k(X_n, t)], subject to Σ_{i=1}^{k} c_i ≤ B, (12)
min_{W,b} Σ_n [ L_CE(R(X_n), y_n) + λ ‖s(X_n) − s*‖^2 ], (13)

where s* is the optimal confidence score. The problem in Eq. (12) can be solved with the Constrained Optimization by Linear Approximation (COBYLA) algorithm [34]. Eq. (13) can be solved using back-propagation with standard stochastic gradient descent (SGD).
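The discrete approximation of the confidence integral in Eq. (10) can be illustrated for a single image: sweep sampled thresholds, find each threshold's exit stage (the first stage whose confidence clears it, falling back to the last stage), and average the resulting error. The per-stage confidence and correctness values below are hypothetical:

```python
import numpy as np

conf  = np.array([0.3, 0.6, 0.85, 0.99])  # per-stage confidence s_i for one image
right = np.array([0,   1,   1,    1])     # correctness e_i at each stage

thresholds = np.linspace(0.05, 0.95, 10)  # discrete samples of t in (0, 1)
errors = []
for t in thresholds:
    # First stage whose confidence clears the threshold (last stage otherwise).
    hits = np.flatnonzero(conf >= t)
    k = hits[0] if hits.size else len(conf) - 1
    errors.append(1 - right[k])
avg_error = np.mean(errors)  # the summation approximating the integral in Eq. (10)
```

For this toy image, low thresholds exit at the (wrong) first stage while higher thresholds reach a correct stage, so the averaged error reflects how confidence calibration and termination interact.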
Note that the desired e_k in Eqs. (10), (11), and (12) depends on the output from the RNN classifier. While it is possible to update e_k after each batch or epoch, doing so is very expensive. In this work, we use the outputs from the evaluation networks for the first several controller training epochs. We then update e_k using the latest outputs from the controller, and continue training the controller for the remaining epochs. We implement the RNN controller as a three-layer LSTM, stacking 3 LSTM cells with 2 Dropout [21] layers in between. Before each forwarding step, the internal states of the controller are initialized with zero inputs. More training details are provided in Section 6.

6 Experimental Results
In this section, we evaluate the proposed ProgNet using the CIFAR-10 [30] and ImageNet (ILSVRC-2012) [1] datasets. On both datasets, our goal is to train a single ProgNet model which provides progressive complexity-accuracy scalability while outperforming existing state-of-the-art networks in terms of complexity-accuracy performance. All ProgNet models are trained on AWS P3 8x large instances (https://aws.amazon.com/cn/ec2/instance-types/p3/) with 4 Tesla V100 GPUs. Testing and runtime benchmarks are executed on a local workstation with 1 Intel Xeon(R) CPU E5-1620 v3 @ 3.50GHz and 4 Pascal Titan GPUs. We implement ProgNet using the Gluon (https://mxnet.incubator.apache.org/tutorials/gluon/gluon.html) imperative Python APIs with the MXNet backend [35]. All reference networks used for performance comparison are also benchmarked with MXNet unless otherwise specified.
(A) Network Configurations. Both the parallel and serial structures of ProgNet are flexible and highly expandable. In this work, we conducted extensive experiments using three different network settings for CIFAR-10 and two different settings for ImageNet, which are summarized in Table 1.
(B) Network Training and Inference. The base classifier and the LSTM controller in ProgNet are trained separately using SGD optimization. For the base network, we use a batch size of 256. The numbers of training epochs for CIFAR-10 and ImageNet are 350 and 180, respectively. Following [2, 29, 28], we use an initial learning rate of 0.1, weight decay of 0.0001, and momentum of 0.9. The learning rate is lowered by a factor of 10 at 25%, 50%, and 80% of the total epochs. The parameters with the best mean accuracy over all DNN units are saved as our best model. We then train the LSTM controller using this best model. We sample the early termination threshold t within the open interval (0, 1); t = 0 and t = 1 are excluded because t = 0 corresponds to no network inference and t = 1 stands for full network inference. The controllers are optimized using SGD with a learning rate of 0.5, weight decay of 0, and momentum of 0 for all experiments.
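The step schedule described above (drop by 10x at 25%, 50%, and 80% of the epochs) can be written as a small helper; a sketch:

```python
def learning_rate(epoch, total_epochs, base_lr=0.1):
    """Step schedule for the base network: the learning rate is dropped
    by a factor of 10 at 25%, 50%, and 80% of the total epochs."""
    lr = base_lr
    for milestone in (0.25, 0.50, 0.80):
        if epoch >= milestone * total_epochs:
            lr /= 10.0
    return lr

# CIFAR-10 run (350 epochs): 0.1 at the start, ~0.001 past the halfway mark.
schedule = [learning_rate(e, 350) for e in (0, 100, 200, 300)]
```

The same helper covers the 180-epoch ImageNet run by changing `total_epochs`.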
To evaluate the impact of the RNN controller during inference, we conducted experiments in the following two modes: dynamic and fixed. In the dynamic mode, users can specify the confidence threshold for early termination. This is the desired behavior in this work. For comparison, we also allow ProgNet to follow a preset stage setting and run in fixed mode. Once set, ProgNet acts as a non-progressive network.
In our experiments, we evaluate two different progressive structures, parallel and serial, with different numbers of stages, such as 4, 6, and 9. We also evaluate the two different modes, dynamic and fixed, for the CADP controller. For example, in our figures, the notation p6-dynamic indicates that the network has a parallel structure with 6 stages and a dynamic CADP mode, while s9-fixed indicates that the network has a serial structure with 9 stages and a fixed CADP mode.
6.1 Experimental Results on the CIFAR-10 Dataset
In our CIFAR-10 experiments, we follow previous practice [24] to augment the data. We preprocess the training images by converting them to float32, followed by upsampling and random cropping. Random horizontal flipping is applied during training as a common strategy. The validation data is converted to float32 without augmentation. We set the batch size to 50. For a given early termination threshold t, the confidence analysis and decision policy network in ProgNet decides when to stop the network inference for each input image. We record the corresponding average network inference time and overall error rate of ProgNet. Fig. 6 shows the resulting complexity-accuracy curves for the following network settings: p6-dynamic, p4-dynamic, and s9-dynamic, with results on CPU (left) and GPU (right). We also include results for ProgNet with fixed controller modes (shown as solid boxes). For comparison, we include the complexity-accuracy results of state-of-the-art networks on the CIFAR-10 dataset: resnet20, resnet10, densenet100/12, and densenet100/24. It can be seen that the proposed ProgNet outperforms existing networks in complexity-accuracy performance using one single model. As we increase the number of stages, we achieve a larger complexity-accuracy scalability range. The parallel structure is slightly better than the serial structure.
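The crop-and-flip augmentation above can be sketched in numpy. Note the paper upsamples before cropping; the pad-and-crop variant below is a common stand-in with the same effect of jittering the object position, and the pad width of 4 is an assumption:

```python
import numpy as np

def augment(img, pad=4, rng=None):
    """Zero-pad, take a random crop of the original size, then apply a
    random horizontal flip.  `img` is an HxWxC float32 array."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    top = int(rng.integers(0, 2 * pad + 1))
    left = int(rng.integers(0, 2 * pad + 1))
    crop = padded[top:top + h, left:left + w]
    if rng.random() < 0.5:
        crop = crop[:, ::-1]  # horizontal flip
    return crop

img = np.random.rand(32, 32, 3).astype(np.float32)
out = augment(img, rng=np.random.default_rng(0))
```

Validation images would skip `augment` entirely, matching the no-augmentation evaluation protocol.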
6.2 Experimental Results on the ImageNet Dataset
Following the same procedures outlined in existing papers [25, 2, 36, 29, 28], we conduct ProgNet training and testing on the ILSVRC-2012 dataset [1]. Training images are augmented with random cropping over a range of aspect ratios, and random horizontal flipping is used during training. For training and validation, pixel means of [123, 117, 104] are subtracted from the images, which are then normalized by standard deviations of [58.4, 57.12, 57.38]. As in [24, 25, 29], we use 1.28 million images for training and 50,000 images for testing.

Similar to the above CIFAR-10 experiments, we record the complexity-accuracy curves for different ProgNet structures (parallel and serial) with different numbers of stages (4 and 6) and different controller modes (dynamic and fixed). Fig. 7 shows these curves for CPU (left) and GPU (right). We also include the complexity-accuracy results achieved by existing networks, including resnet18, resnet50, densenet121, mobilenet-v1, and mobilenet-v2. Table 2 summarizes the complexity-accuracy performance of ProgNet in comparison with existing networks, including the number of model parameters, the number of MACC (multiply-accumulate) operations, and running times on CPU and GPU. It can be seen that the proposed ProgNet outperforms existing networks in complexity-accuracy performance by a large margin using one single model. At the same complexity, both ProgNet variants outperform MobileNet [3, 23], which has been significantly optimized, by 2% to 3% in classification accuracy. At the same accuracy, the ProgNet p4-dynamic model achieves 20% less inference time than MobileNet-v2.
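The per-channel normalization above uses the means [123, 117, 104] and standard deviations [58.4, 57.12, 57.38] stated in the text; a minimal sketch:

```python
import numpy as np

# Per-channel statistics used for ImageNet training and validation.
MEAN = np.array([123.0, 117.0, 104.0])
STD  = np.array([58.4, 57.12, 57.38])

def normalize(img):
    """img: HxWx3 array of RGB pixel values in [0, 255]."""
    return (img - MEAN) / STD

out = normalize(np.full((2, 2, 3), 123.0))
```

The same transform is applied at both training and test time, so the network always sees inputs on a comparable scale.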
In the ProgNet design, the confidence analysis and decision policy (CADP) network plays a critical role, since it controls whether the next network stage should be activated for an input image or whether its inference should be terminated early. This has a direct impact on the complexity-accuracy performance of ProgNet. To further understand the behavior and capability of the CADP controller, we implemented a random controller in which the network inference of an input image is terminated at a random stage. We ran the experiment with this random controller on the ImageNet dataset 1000 times to simulate a partial brute-force search for the best control policy. In Fig. 8, each dot represents the complexity-accuracy result of one experimental run, and the solid curve represents our CADP controller. We can see that the CADP controller is very effective, outperforming the random control policies.
7 Conclusion
In this work, we have established a progressive deep neural network structure which is able to adapt its network inference structure and computational complexity to images with different visual recognition complexity. This progressive structure is able to scale up its discriminative power and achieve higher recognition capability by activating and executing more analysis units of the deep neural network for more difficult vision analysis tasks or image sets. We have developed a multi-stage progressive structure, called ProgNet, for deep neural networks, with each stage being separately evaluated for output decision and all stages being activated in a sequential manner with progressively increased complexity and visual recognition power. We have also developed a recurrent neural network method to learn the confidence analysis and decision policy for early termination. Our extensive experimental results on the CIFAR-10 and ImageNet datasets demonstrate that the proposed ProgNet is able to obtain more than 10-fold complexity scalability while achieving state-of-the-art performance with a single network model.
References
[1] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115(3) (2015) 211–252
[2] Huang, G., Liu, Z., Weinberger, K.Q., van der Maaten, L.: Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. Volume 1. (2017) 3
 [3] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
 [4] Gong, Y., Liu, L., Yang, M., Bourdev, L.: Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115 (2014)

[5] Wu, J., Leng, C., Wang, Y., Hu, Q., Cheng, J.: Quantized convolutional neural networks for mobile devices. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 4820–4828
[6] Vanhoucke, V., Senior, A., Mao, M.Z.: Improving the speed of neural networks on cpus. In: Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop. Volume 1., Citeseer (2011) 4
 [7] Gupta, S., Agrawal, A., Gopalakrishnan, K., Narayanan, P.: Deep learning with limited numerical precision. In: International Conference on Machine Learning. (2015) 1737–1746
 [8] Hanson, S.J., Pratt, L.Y.: Comparing biases for minimal network construction with backpropagation. In: Advances in neural information processing systems. (1989) 177–185
 [9] LeCun, Y., Denker, J.S., Solla, S.A.: Optimal brain damage. In: Advances in neural information processing systems. (1990) 598–605
 [10] Hassibi, B., Stork, D.G.: Second order derivatives for network pruning: Optimal brain surgeon. In: Advances in neural information processing systems. (1993) 164–171
[11] Tai, C., Xiao, T., Zhang, Y., Wang, X., et al.: Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067 (2015)
 [12] Cohen, T., Welling, M.: Group equivariant convolutional networks. In: International Conference on Machine Learning. (2016) 2990–2999

[13] Shang, W., Sohn, K., Almeida, D., Lee, H.: Understanding and improving convolutional neural networks via concatenated rectified linear units. In: International Conference on Machine Learning. (2016) 2217–2225
[14] Buciluǎ, C., Caruana, R., Niculescu-Mizil, A.: Model compression. In: Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, ACM (2006) 535–541
 [15] Ba, J., Caruana, R.: Do deep nets really need to be deep? In: Advances in neural information processing systems. (2014) 2654–2662
 [16] Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)
 [17] Kang, D., Emmons, J., Abuzaid, F., Bailis, P., Zaharia, M.: Noscope: optimizing neural network queries over video at scale. Proceedings of the VLDB Endowment 10(11) (2017) 1586–1597
 [18] Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q.: On calibration of modern neural networks. arXiv preprint arXiv:1706.04599 (2017)
 [19] Subramanya, A., Srinivas, S., Babu, R.V.: Confidence estimation in deep neural networks via density modelling. arXiv preprint arXiv:1707.07013 (2017)
 [20] Gal, Y., Ghahramani, Z.: Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In: international conference on machine learning. (2016) 1050–1059
 [21] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15(1) (2014) 1929–1958
 [22] Bendale, A., Boult, T.E.: Towards open set deep networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2016) 1563–1572
 [23] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. arXiv preprint arXiv:1801.04381 (2018)
 [24] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. (2016) 770–778
 [25] He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: European Conference on Computer Vision, Springer (2016) 630–645
 [26] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 2818–2826

[27] Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: AAAI. Volume 4. (2017) 12
[28] Zoph, B., Vasudevan, V., Shlens, J., Le, Q.V.: Learning transferable architectures for scalable image recognition. arXiv preprint arXiv:1707.07012 (2017)
 [29] Zoph, B., Le, Q.V.: Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578 (2016)
 [30] Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. (2009)
 [31] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems. (2012) 1097–1105
[32] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
 [33] Pathak, D., Krahenbuhl, P., Darrell, T.: Constrained convolutional neural networks for weakly supervised segmentation. In: Proceedings of the IEEE international conference on computer vision. (2015) 1796–1804
 [34] Powell, M.J.: A view of algorithms for optimization without derivatives. Mathematics TodayBulletin of the Institute of Mathematics and its Applications 43(5) (2007) 170–174
 [35] Chen, T., Li, M., Li, Y., Lin, M., Wang, N., Wang, M., Xiao, T., Xu, B., Zhang, C., Zhang, Z.: Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv preprint arXiv:1512.01274 (2015)
 [36] Zhang, X., Zhou, X., Lin, M., Sun, J.: Shufflenet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083 (2017)