In the last few years, Deep Convolutional Neural Networks (DCNNs) for mobile platforms, which operate under tight constraints on computational capacity and battery, have proven experimentally successful in a wide variety of machine-vision tasks [20, 16, 43, 42, 52, 33]. Many compressed neural networks have been proposed, based on pruning [13, 12, 29, 49, 50, 31] and quantization [1, 54, 25]. Binary Neural Networks (BNNs) have recently attracted much interest and achieved significant improvements [39, 15, 32, 3, 45]. Prior works focused on binarizing large ConvNets, which often contain several million parameters. Compact neural networks (e.g., MobileNet) are, on the other hand, among the most promising architectures for binarization. MobileNet exploits light-weight depth-wise and point-wise convolution layers to improve network efficiency when deployed on mobile devices. However, it is not trivial to make the depth-wise operators cope with 1-bit quantization in order to make the network even more compact.
With depth-wise convolution, a neural network achieves low inference latency, and even more so when 1-bit quantized. In such a binarization, however, the input of each convolutional layer is multiplied and summed channel-wise, so the output values are limited to a narrow range: a binary depth-wise filter convolving with a single channel of the input yields only a handful of integer values (a 3×3 kernel of weights in {-1, +1} produces sums in [-9, 9]), degrading the representation capability of the binary neural network. Binary vanilla convolution, by contrast, sums over all input channels and results in a much larger output value range, which allows an abundant feature representation and effectively preserves the distribution of the data samples through the network layers. Figure 1 illustrates the principle of this perspective. Although effective, vanilla convolution comes with an expensive computational cost, since each filter convolves all channels of the input tensor. Therefore, replacing either the depth-wise or the vanilla operator with group convolution appears to be a promising way to balance the trade-off between network latency, feature representation capability, and computational resource constraints.
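To make the range argument concrete, the following NumPy sketch (all sizes and values are illustrative, not from the paper) empirically compares the output range of a binary 3×3 depth-wise filter against a binary vanilla filter that also sums over 64 input channels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Binary (+1/-1) 3x3 depth-wise filter applied to one binary input channel:
# each output is a sum of 9 terms in {-1, +1}, so it is confined to the
# odd integers in [-9, 9] -- a very narrow set of possible activations.
k = rng.choice([-1, 1], size=(3, 3))
patches = rng.choice([-1, 1], size=(10000, 3, 3))
dw_out = (patches * k).sum(axis=(1, 2))
print(dw_out.min(), dw_out.max())          # bounded by [-9, 9]

# A binary vanilla (full) convolution also sums over all C input channels,
# so with C = 64 each output is a sum of 576 terms and spans a far richer
# range, better preserving the input distribution through the layer.
C = 64
kf = rng.choice([-1, 1], size=(C, 3, 3))
patches_f = rng.choice([-1, 1], size=(10000, C, 3, 3))
full_out = (patches_f * kf).sum(axis=(1, 2, 3))
print(full_out.min(), full_out.max())
```

Group convolution with 1 < g < C interpolates between these two extremes, which is exactly the design space the paper searches.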
Group convolution is a simple yet efficient operation used in various neural networks to reduce both the trainable parameters and the computation. AlexNet, CondenseNet, ResNeXt, etc. are among the popular architectures exploiting group convolution to great effect. At one extreme lies depth-wise convolution, in which each channel is a separate group; in other words, the number of groups equals the depth of the input tensor. Vanilla convolution is the other special case of group convolution, where the number of groups is 1. In most networks with group convolutional layers, the number of groups is homogeneous across the layers. From our perspective, a heterogeneous scheme that distributes groups differently across layers can help to construct an efficient and accurate neural architecture, under the intuitive assumption that feature representations differ across network layers.
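As a quick illustration of this cost spectrum, the parameter count of a grouped convolution scales inversely with the number of groups. A minimal sketch (channel sizes are hypothetical, not the paper's):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2-D convolution with kernel k x k:
    each of the c_out filters sees only c_in/groups input channels."""
    assert c_in % groups == 0 and c_out % groups == 0
    return c_out * (c_in // groups) * k * k

c_in = c_out = 256
for g in (1, 4, 16, 256):       # g=1: vanilla conv, g=256: depth-wise conv
    print(g, conv_params(c_in, c_out, 3, g))
# 589824, 147456, 36864, and 2304 parameters respectively
```

The heterogeneous search described below picks one such g per layer rather than using a single value everywhere.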
Aiming to leverage the effectiveness of heterogeneous group convolution, in this paper we propose a novel weight-sharing mechanism to explore the group search space for optimally compact binary neural architectures that are both efficient and accurate. The key idea is to formulate the search as an optimization problem that seeks a new genre of compact architecture, one capable of performing efficiently on challenging and complicated image classification tasks where the data volume is huge and the object types are diverse. Instead of searching a highly intractable space of convolutional operations as in neural architecture search (NAS) [55, 37, 48, 27], we exploit a controllable search space of group convolutions in a 13-layer MobileNet structure, resulting in a compact yet efficient binary architecture.
The main contributions of this paper are threefold:
We introduce a novel construction of binary neural networks that is among the first studies to search for a potential architecture design via a heterogeneous combination of group convolutional layers. Our work sheds light on a new direction for enhancing the capability of BNNs.
We propose an adaptive weight-sharing training mechanism that automatically searches in the group space to build efficient BNNs. More importantly, our training scheme is intuitive, flexible, and straightforward to implement.
We conduct extensive experiments to show that, following our approach, the constructed binary neural architecture achieves significant improvements in computation saving and model accuracy, attaining state-of-the-art performance on the large-scale ImageNet dataset.
2 Related Work
Binary neural networks have attracted much research interest. Courbariaux et al. [8, 7, 18] described the very first works constraining the full-precision weights of deep convolutional neural networks to {-1, +1}, utilizing the XNOR-count operator to accelerate inference relative to both the standard convolutional operation and cuBLAS, an efficient GPU library for linear algebra. These works achieve high accuracy on popular datasets such as MNIST, CIFAR10, and SVHN. XNOR-Net is an interesting idea that makes use of scaling factors estimated from the full-precision weights, achieving 44% Top-1 accuracy on ImageNet with the AlexNet architecture. The two approaches most related to our work are Bi-RealNet and MoBiNet. These binary models deploy compact modules with skip connections and group convolution to enhance the feature representation capability of BNNs; when binarizing both activations and weights, they reach state-of-the-art performance of 56% and 54% Top-1 accuracy on ImageNet respectively. A recent work on BNNs introduced a Binary Optimizer that removes the dependency of the binary weights on the real values, opening a new way to improve BNNs.
To ameliorate binary neural network architectures, we adopt the methodology of Neural Architecture Search (NAS), which aims to construct neural networks automatically rather than manually. Many NAS algorithms formulate the search as an optimization problem and have achieved great success [28, 5] in finding optimal architectures under constraints on network latency and computational resources. In the following we focus on reviewing neural architecture search methods appropriate for mobile devices. Pham et al. introduced ENAS, considered one of the first efficient neural architecture search approaches using a cell-based search space. It trains a super-graph from which sub-optimal paths are selected so that sub-models share parameters; this mitigates the challenge of wandering through a huge exploration space by shrinking the number of parameters involved in the search. Other approaches also outperform manually designed networks. Liu et al. proposed DARTS, a prominent gradient-based method that jointly optimizes one-shot models on a continuous relaxation of the search space; however, because the models are assembled as a mixture over a set of operations, performance relies heavily on the selection of that set. Another approach in the same flavor is ProxylessNAS, which binarizes the architecture parameters to abate the GPU memory usage of one-shot models; the probability of selecting operation edges is updated via BinaryConnect.
While NAS algorithms based on reinforcement learning and evolutionary methods demand prohibitive computation with thousands of GPUs [55, 27, 40, 41, 34], single-path one-shot architecture search methods are affordable over a conditioned exploration space. Guo et al. and Chen et al. implemented the one-shot models SPOS and DetNAS to solve image classification and object detection problems, respectively. SPOS trains a random single path at every iteration to set up a super network, on which an evolutionary search then seeks an optimal path for the final network. In the one-shot network, pre-trained outputs can be transferred to different tasks such as object detection and segmentation. Our proposed method has a similar flavor: it trains with random group convolutions, modifying the neural architecture via weight sharing and searching for the optimal group combinations.
3 Our Methodology
3.1 Binary Operation
In this section, we provide some fundamental background on binary neural networks. When binarizing weights and activations, a typical binary neural network uses the sign function to constrain values to either +1 or -1:

x_b = sign(x) = +1 if x >= 0, and -1 otherwise,

where x_b is the binarized value of x, which can be a network input or a weight. Similar to float-type neural networks, the 1-bit weights are computed to minimize an objective function:

min L(f(X, W_b), y),

where L is the loss function and X, W_b, y are the inputs, binary weights, and labels respectively. Because 1-bit values degrade the network's capability of preserving features through its layers, we apply the scaling factors and backpropagation scheme of XNOR-Net to tackle the training divergence issue and to enhance the binary network performance. Also, since the sign function is non-differentiable, we adapt an approximation of its derivative with respect to the activation x. F(x) denotes a differentiable piece-wise polynomial approximation of sign(x),

F(x) = -1 if x < -1;  2x + x^2 if -1 <= x < 0;  2x - x^2 if 0 <= x < 1;  1 otherwise,

whose derivative, dF/dx = 2 + 2x on [-1, 0), 2 - 2x on [0, 1), and 0 elsewhere, is used in place of d sign(x)/dx.
The weights are binarized only in the forward step, for both the training and testing stages, so the binary xnor-popcount operator [35, 2] can accelerate the process. In the backward step, the real-valued weights are stored to compute the derivatives and update to new values.
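The forward/backward scheme above can be sketched in NumPy as follows (a minimal illustration assuming the piece-wise polynomial surrogate given earlier; function names are ours):

```python
import numpy as np

def binarize(x):
    """Forward pass: sign(x) in {-1, +1} (sign(0) taken as +1)."""
    return np.where(x >= 0, 1.0, -1.0)

def approx_sign_grad(x):
    """Backward pass: derivative of the piece-wise polynomial surrogate F(x),
    used in place of d(sign)/dx, which is zero almost everywhere:
        dF/dx = 2 + 2x on [-1, 0),  2 - 2x on [0, 1),  0 elsewhere."""
    g = np.zeros_like(x, dtype=float)
    left = (x >= -1) & (x < 0)
    right = (x >= 0) & (x < 1)
    g[left] = 2 + 2 * x[left]
    g[right] = 2 - 2 * x[right]
    return g

x = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])
print(binarize(x))          # [-1. -1.  1.  1.  1.]
print(approx_sign_grad(x))  # [0. 1. 2. 1. 0.]
```

In training, `binarize` is applied to a copy of the stored real-valued weights, while `approx_sign_grad` scales the incoming gradient before it updates those real weights.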
3.2 Design And Search 1-Bit MobileNets
Binary neural networks and neural architecture search are two of the most promising techniques for constructing compact yet efficient neural models. The network architecture design usually has a great impact on the performance of binary networks. The main objective of our work is to explore efficient designs of BNNs, with the hope that techniques from neural architecture search (NAS) can leverage the exploration of compact structures. However, NAS typically covers a huge search space of convolutional operators in order to generate candidate sub-networks, and exploring such a space directly with binary operators would be very difficult and costly. To simplify the search space and keep the computation from an exorbitant price, we develop a novel training procedure exploiting randomized group convolution operators with weight sharing. With this approach, the binary models become robust through exposure to a wide variety of group combinations, which fosters the group search procedure. Exploring new binary architectures with neural architecture search opens a potential research direction for significantly improving binary network construction. In the next sections, we discuss how to conduct and optimize the group convolution search with our proposed training pipeline.
3.2.1 Evolutionary Group Convolution Search for Binary Neural Networks
Evolutionary algorithms, a.k.a. genetic algorithms, are based on the well-known evolution of species in nature: natural selection eliminates individuals unable to adapt to the environment, while survivors are kept for reproduction, crossover, and mutation. Several recent evolutionary approaches to neural architecture search have been proposed [30, 46]. Instead of searching for an entire network, including a complete set of connections and operators as in prior works, we conduct an evolutionary search over the group values of the convolutional layers to explore a suitable binary structure with a simple and effective network design. At each layer, the group candidates are all possible divisors of the number of input channels. In detail, we start by sampling a list of possible groups and search over this list for an optimal architecture by training with random groups at every iteration. The first objective is to achieve accuracy above a threshold. Second, to keep the computational cost controllable, we select binary compact models satisfying a constraint on the maximum number of FLOPs such that
FLOPs(W, G) <= budget, where W denotes the weights and G = (g_1, ..., g_L) is the group combination over the convolutional layers. The search pipeline is presented in Algorithm 1.
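Algorithm 1 is not reproduced here; the pure-Python sketch below conveys its flavor under our own assumptions (function names, population sizes, and the toy fitness are illustrative, not the paper's): candidates are tuples of per-layer group counts drawn from the divisors of each layer's input channels, evolved by crossover and mutation while respecting a FLOPs budget.

```python
import random

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def conv_flops(c_in, c_out, k, out_hw, groups):
    # multiply-accumulates of one grouped k x k conv layer
    return out_hw * c_out * (c_in // groups) * k * k

def candidate_flops(groups, layers):
    return sum(conv_flops(*layer, g) for g, layer in zip(groups, layers))

def evolve_groups(layers, fitness, budget, pop=8, iters=20, seed=0):
    rnd = random.Random(seed)
    choices = [divisors(c_in) for (c_in, _, _, _) in layers]

    def sample():
        return tuple(rnd.choice(c) for c in choices)

    # seed the population with budget-respecting candidates only
    population = [g for g in (sample() for _ in range(20 * pop))
                  if candidate_flops(g, layers) <= budget][:pop]
    for _ in range(iters):
        population.sort(key=fitness, reverse=True)
        parents = population[:max(2, pop // 2)]
        children = list(parents)                    # elitism: keep the best
        while len(children) < pop:
            a, b = rnd.sample(parents, 2)
            cut = rnd.randrange(1, len(a))          # one-point crossover
            child = list(a[:cut]) + list(b[cut:])
            i = rnd.randrange(len(child))           # single-gene mutation
            child[i] = rnd.choice(choices[i])
            child = tuple(child)
            if candidate_flops(child, layers) <= budget:
                children.append(child)              # over-budget: retry
        population = children
    return max(population, key=fitness)

# toy usage: three 16-channel layers; the toy fitness favors mid-size groups
layers = [(16, 16, 3, 32 * 32)] * 3
best = evolve_groups(layers, lambda g: -sum((x - 4) ** 2 for x in g),
                     budget=3 * conv_flops(16, 16, 3, 32 * 32, 2))
print(best)
```

In the paper's pipeline the fitness is the validation accuracy of a shared-weight binary super-network; here it is replaced by a toy function purely to keep the sketch self-contained.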
3.2.2 Module Modification
MobileNets with depth-wise and point-wise convolutions (together known as separable depth-wise convolution) are famous for their compactness and effectiveness in neural network design. We modify the MobileNet structure to facilitate the creation of efficient binary neural networks that outperform prior state-of-the-art works in accuracy and memory saving. However, training a binary depth-wise convolution is not straightforward, because the separate channel-wise outputs fall into a small value range by the nature of the computation, making the binary network unable to converge. To overcome this issue, we replace the depth-wise convolution with group convolution to enlarge the value range of its output. More precisely, we search for the groups of binary convolution operators of kernel sizes 3×3 and 1×1. To preserve the feature representation through the binary layers at low computational cost, we keep full-precision convolution wherever a layer output undergoes a reduction in spatial dimension. In addition, block-wise and layer-wise skip connections are added whenever the dimensions match, to benefit the network training. Our proposed network module is illustrated in Figure 3. Three principal modifications distinguish our modules from the vanilla MobileNet architecture:
Module 1 (M1): consists of a binary group conv and a binary full conv. A real full conv follows when there is a spatial dimension shrinkage (see Figure 3, left).
Module 2 (M2): uses group convolution in place of the real full conv to further reduce the computational cost. This group is also searched, along with that of the binary group conv.
Module 3 (M3): is made up of two binary conv layers instead of one, whose outputs are then concatenated to obtain the same dimension.
In the next section, we describe a training scheme based on randomized groups with weight sharing that enables the binary neural network to converge.
3.2.3 Randomized Groups via Weight-sharing
In the search stage, a fitness function (e.g., the accuracy of the model) is computed to guide the exploration of optimal group combinations. However, if we naively calculate the accuracy of a binary model without training it on data samples (i.e., images), there is no guarantee that the selected model can learn the distribution of the target dataset. Therefore, to leverage important information from the dataset during the evolutionary search, we propose to train the binary model with a randomized group combination, via weight sharing, in each training iteration. To ease the implementation, full convolutions are initialized once and then cropped according to the randomized groups in each iteration. The weight-sharing is depicted in Figure 4.
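The cropping step can be sketched as follows (a minimal NumPy illustration of our reading of the mechanism; the function name and tensor sizes are ours): a single full-convolution weight tensor is kept, and for a sampled group count g each filter is cropped to the input channels of its own group, so every group configuration trains the same underlying parameters.

```python
import numpy as np

def crop_to_groups(w_full, g):
    """Crop a full-conv weight (c_out, c_in, k, k) to a grouped-conv weight
    (c_out, c_in // g, k, k): filter block j keeps only input block j."""
    c_out, c_in = w_full.shape[:2]
    assert c_out % g == 0 and c_in % g == 0
    per_out, per_in = c_out // g, c_in // g
    w_group = np.empty((c_out, per_in) + w_full.shape[2:], w_full.dtype)
    for j in range(g):
        rows = slice(j * per_out, (j + 1) * per_out)
        cols = slice(j * per_in, (j + 1) * per_in)
        w_group[rows] = w_full[rows, cols]   # shared slice of the full weight
    return w_group

w = np.random.default_rng(0).normal(size=(8, 8, 3, 3))
print(crop_to_groups(w, 4).shape)   # (8, 2, 3, 3)
```

Because each crop is a view onto the same full tensor's entries, gradients from any sampled group configuration update the shared weights.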
3.2.4 Training Procedure
Binary neural networks are an active and progressive research topic with prominent works [38, 32, 45]. Training neural networks with 1-bit weights is a difficult task because the feature representation has a narrow value range, which makes fitting a large-scale classification dataset nearly impossible. XNOR-Net utilizes the full-precision weight values to derive real-valued scaling factors, which play a crucial role in amplifying the magnitude of the binary weights and activations. The optimization is formulated as follows:

min over alpha, B of || W - alpha * B ||^2
Here W can be weights or activations, B is its binary counterpart, and alpha is the scaling factor. The optimal solution of Equation 6 is B* = sign(W) and alpha* = ||W||_1 / n, the mean absolute value of W. Bi-RealNet and MoBiNet make use of skip connections to enhance the performance of binary neural networks. In that flavor, we combine the skip connections with scaling factors to facilitate the training procedure. Our training is summarized as follows:
For each iteration, we train binary neural networks with random group combinations. For instance, if the network has 13 layers, the group of each layer is randomized among the possible divisors of its input channels. This randomization makes the 1-bit models robust against group changes during the search.
We then train the final binary models, with the optimal group convolutions, from scratch. All steps run on the large-scale ImageNet-1k dataset.
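The XNOR-Net-style binarization used throughout these steps reduces to a closed form; a minimal sketch (the function name is ours):

```python
import numpy as np

def xnor_binarize(w):
    """B = sign(w) and alpha = mean(|w|): the closed-form minimizers of
    || w - alpha * B ||^2 over alpha and B in {-1, +1}."""
    alpha = np.abs(w).mean()
    b = np.where(w >= 0, 1.0, -1.0)
    return alpha, b

w = np.array([[0.5, -0.2], [1.1, -0.8]])
alpha, b = xnor_binarize(w)
print(alpha)        # ~0.65
print(alpha * b)    # scaled binary approximation of w
```

At inference only `b` (packed as bits) and the scalar `alpha` per filter are needed, which is what enables the xnor-popcount acceleration.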
4 Experiments
In this section, we evaluate the performance of our proposed method. First, we describe the experimental setup and implementation details. Second, to show that our weight-sharing group search mechanism is more effective than naive random search, we compare the training performance against randomized groups in ablation studies. Third, we evaluate the searched groups against uniform groups to investigate whether the number of groups should indeed differ at each level of feature representation. We then compare with the state of the art to assess the improvements of our proposed BNNs. Finally, a computation analysis is presented. All experiments, including searching, training, and testing, are conducted on the large-scale ImageNet2012-1k dataset. We analyze the results with three metrics: Top-1 and Top-5 classification accuracy on ImageNet, and the number of FLOPs.
4.1 Experimental Setups
Dataset. The image dataset used to demonstrate the effectiveness of our framework is ILSVRC2012, containing 1.2M and 50K image samples for training and testing respectively, across 1000 classes. Most previous works, such as XNOR-Net, Bi-RealNet, CI-BCNN, and MoBiNet, also used this dataset to evaluate their model performance.
Implementation details and setups. Our training pipeline consists of three main stages: training the binary architecture with randomized groups, searching groups for the convolutional layers via the evolutionary method, and training the final model with the searched groups from scratch. We train on the basic blocks modified from MobileNet described in Section 3.2.2 to improve the performance of the binary models. Each image is scaled to 256×256; in training, images are randomly cropped to 224×224, and in testing they are centrally cropped to 224×224. During training, the real-valued filters are kept in memory to compute the updates in backpropagation via Equation 3, and are binarized at the inference stage. In the first stage of the pipeline, we train the 1-bit models with random groups for a number of epochs. In the search stage, the population is evolved through crossovers and mutations over a fixed number of iterations to find the optimal group structure. At the end of the pipeline, we train the final models from scratch. All training stages use the Adam optimizer with momentum, and the learning rate is updated through linear decay. FLOPs are calculated following the suggestion of [32, 45] for fair comparison. Training is conducted on four RTX 2080 Ti GPUs (24 GB) and searching on a single GPU.
4.2 Our Search Group vs. Random Group Search and Uniform Group Architectures
To investigate our hypothesis of binarizing convolution via evolution-based search in the MobileNet architecture, we compare against random search and uniform group architectures as an ablation study. The experiment is conducted on the three modules proposed in Section 3.2.2. For Module 2, there are four full-precision convolutions; we also apply the searched groups to those layers to further reduce the computational cost.
To compare with random group search, we report the Top-1 and Top-5 accuracy for each epoch. The training performance comparison is shown in Figure 5, covering the modifications from MobileNets: Module 1, Module 2, and Module 3 (from top to bottom in that order).
We run the comparison between randomized and searched groups with the same number of epochs and batch size, and observe the Top-1 and Top-5 accuracy on the ImageNet validation set at each epoch. For the first two modules, training with our searched group architecture is more stable, and across all epochs we achieve more accurate results in both Top-1 and Top-5 accuracy.
Our searched groups achieve better performance than random groups, which also require more computational cost. Table 1 reports the number of FLOPs when running with random groups and with our proposed group search. Even though the random architectures have a larger computational cost, our searched group networks are more accurate and efficient.
We also provide a comparison with uniform groups (i.e., using the same number of groups for all layers of Modules M1, M2, and M3) as an ablation study of our hypothesis. We train models with uniform groups of 1 (full convolution), 4, and 16. Table 1 presents the Top-1 accuracy, Top-5 accuracy, and number of FLOPs (computational cost).
The reported statistics express a performance trade-off between full convolution and depth-wise convolution. For example, in M1 and M3 our searched group models outperform the comparable uniform group models (g = 4 and g = 16) in accuracy while requiring fewer FLOPs.
| #Groups (M1) | Top-1 (%) | Top-5 (%) | FLOPs |
| --- | --- | --- | --- |
| Groups = 1 | 64.51 | 85.14 | |
| Groups = 4 | 60.89 | 82.54 | |
| Groups = 16 | 58.49 | 80.66 | |

| #Groups (M2) | Top-1 (%) | Top-5 (%) | FLOPs |
| --- | --- | --- | --- |
| Groups = 1 | 64.51 | 85.14 | |
| Groups = 4 | 59.59 | 81.67 | |
| Groups = 16 | 54.23 | 77.04 | |

| #Groups (M3) | Top-1 (%) | Top-5 (%) | FLOPs |
| --- | --- | --- | --- |
| Groups = 1 | 57.56 | 79.85 | |
| Groups = 4 | 49.90 | 73.15 | |
| Groups = 16 | 45.29 | 69.38 | |
4.3 The Efficiency of Our BNNs
The MobileNet architecture is a compact network that works accurately and efficiently thanks to its light-weight separable convolution layers. Binarizing such a compact model is promising because it contains few parameters, offering a tremendous reduction of the computational cost, ideally without incurring accuracy loss. However, as mentioned in Section 1, networks exploiting separable convolutions, including the depth-wise scheme, cannot converge when binarized because of their extremely small value range, which cannot adequately fit complex data samples such as images. In contrast, group and full convolutional layers make it easier for the networks to perform well. Albeit highly accurate, full convolutional layers are not efficient for deployment on mobile devices because of their huge number of parameters. Group convolutional layers therefore offer a potential trade-off between depth-wise and full convolution. In this work, we propose a group search mechanism via an evolutionary method to find the group structure at each convolutional layer of a binary neural network in the MobileNet architecture.
To show the effectiveness of our proposed search mechanism, we conduct experiments on the modified modules with different computational budget constraints. We first train binary models with random groups for each module. Then, we search for networks satisfying the FLOP budget to derive optimal group structures. Finally, the networks with optimal groups are trained from scratch. The other settings are as in Section 4.1. Top-1 and Top-5 accuracy on ImageNet-1k, FLOPs, the budget constraint, and the number of GPU-hours for searching are reported in Table 2.
Compared to the full-precision MobileNet, our constructed binary neural networks obtain substantial acceleration when using Module 1, Module 2, and Module 3, while incurring only small Top-1 accuracy losses. We also outperform the most related work, MoBiNet; details are given in Section 4.4. In addition, our search algorithm takes only 29 hours on one GPU on average.
In the next section, we compare our method with other state-of-the-art binary neural networks.
4.4 Comparison with State-of-the-art Methods
Binary neural networks have recently made impressive progress. However, prior works improve binary models through the training process for representation learning, while the architecture design should have a great influence as well. Our proposed method, an evolutionary search based on recent ideas from one-shot neural architecture search, aims at exploring group architecture designs to improve BNNs.
In this section, to evaluate the proposed method we compare our BNNs with several recent works: Binary Connect, BNNs, ABC-Net, DoReFa-Net, XNOR-Net, etc. The metrics reported are Top-1 and Top-5 accuracy on ImageNet and the number of FLOPs. Bi-RealNet and CI-BCNN are two prominent works achieving good results; these networks binarize ResNet with an efficient skip-connection module. Here we consider only the 18-layer ResNet versus our 13 layers for fair comparison. CI-BCNN is the state-of-the-art binary model (with both weights and activations binarized), achieving high Top-1 accuracy on ImageNet at a very small FLOP cost. Our binary model using Module 1 outperforms MoBiNet and Bi-RealNet-18 in Top-1 accuracy with less computational cost; moreover, it surpasses CI-BCNN in Top-1 accuracy with a lower number of FLOPs. Our Module 2 and Module 3 likewise surpass Bi-RealNet while requiring significantly fewer FLOPs.
Overall, our method significantly outperforms the other binary neural networks on both the accuracy and computation metrics. The accuracy results are reported in Table 3; our proposed binary networks are better than most prior works. For computational cost, Table 1 compares the number of FLOPs and memory usage.
| Method | W/A (bits) | Top-1 (%) | Top-5 (%) |
| --- | --- | --- | --- |
| Binary Connect | 1/32 | 35.40 | 61.00 |
| CI-BCNN-18 (add) | 1/1 | 59.90 | 84.18 |
In this section, we analyze the results and the computational complexity. In the ablation study of Section 4.2, the searched group architectures are more stable and more accurate than naively randomized groups, showing that a heterogeneous group structure at each layer of the MobileNet architecture yields good performance. In addition, group convolution provides the flexibility to increase or decrease the number of connections in selected layers. For example, in the first layers the search algorithm tends to assign a small number of groups to preserve essential information from the inputs; meanwhile, it prunes insignificant inter-channel connections (i.e., by increasing the number of groups) to enhance the model's compactness and efficiency.
Efficient group design for BNNs can yield good outcomes. We introduced a novel evolutionary search algorithm to explore group structures, optimizing the trade-off between depth-wise and full convolutional layers in MobileNet. Our BNNs are efficient, achieving highly accurate results while saving computational cost (a single GPU suffices for searching) on challenging visual classification tasks.
Acknowledgements. We thank all anonymous reviewers for their constructive and valuable feedback. The code will be available at link
-  Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless CNNs with low-precision weights. In International Conference on Learning Representations (ICLR), 2017.
-  Joseph Bethge, Marvin Bornstein, Adrian Loy, Haojin Yang, and Christoph Meinel. Training competitive binary neural networks from scratch. ArXiv e-prints, 2018.
-  Adrian Bulat and Georgios Tzimiropoulos. Xnor-net++: Improved binary neural networks. In BMVC, 2019.
-  Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019.
-  Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. In The International Conference on Computer Vision (ICCV), October 2019.
-  Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Xinyu Xiao, and Jian Sun. Detnas: Backbone search for object detection. In NeurIPS 2019, 2019.
-  Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, abs/1602.02830, 2016.
-  Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NIPS, pages 3123–3131, 2015.
-  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
-  Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Hang Su, and Jun Zhu. Stochastic quantization for learning accurate low-bit deep neural networks. International Journal of Computer Vision, Mar 2019.
-  Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420, 2019.
-  Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2015.
-  Song Han, Jeff Pool, John Tran, and William J. Dally. Learning both weights and connections for efficient neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPS’15, pages 1135–1143, Cambridge, MA, USA, 2015. MIT Press.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
-  Koen G. Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, Kwang-Ting Cheng, and Roeland Nusselder. Latent weights do not exist: Rethinking binarized neural network optimization. ArXiv, abs/1906.02107, 2019.
-  Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.
-  Gao Huang, Shichen Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Condensenet: An efficient densenet using learned group convolutions. In The IEEE Conference on CVPR, June 2018.
-  Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In NIPS, pages 4107–4115, 2016.
-  Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4107–4115. Curran Associates, Inc., 2016.
-  Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size. CoRR, abs/1602.07360, 2017.
-  Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
-  A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.
-  Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324, 1998.
-  Quanquan Li, Shengying Jin, and Junjie Yan. Mimicking very efficient network for object detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7341–7349, 2017.
-  Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 345–353. Curran Associates, Inc., 2017.
-  Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss, editors, Computer Vision – ECCV 2018, pages 19–35, Cham, 2018. Springer International Publishing.
-  Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
-  Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2755–2763, 2017.
-  Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Tim Kwang-Ting Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. In The International Conference on Computer Vision (ICCV), October 2019.
-  Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In International Conference on Learning Representations, 2019.
-  Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In ECCV, 2018.
-  Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In The European Conference on Computer Vision (ECCV), September 2018.
-  Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Daniel Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, and Babak Hodjat. Chapter 15 - evolving deep neural networks. In Robert Kozma, Cesare Alippi, Yoonsuck Choe, and Francesco Carlo Morabito, editors, Artificial Intelligence in the Age of Neural Networks and Brain Computing, pages 293 – 312. Academic Press, 2019.
-  Wojciech Mula, Nathan Kurz, and Daniel Lemire. Faster population counts using AVX2 instructions. CoRR, abs/1611.07612, 2016.
-  Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
-  Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameters sharing. In Proceedings of the 35th International Conference on Machine Learning, pages 4095–4104, Stockholmsmässan, Stockholm, Sweden, 10–15 Jul 2018. PMLR.
-  Hai Phan, Dang Huynh, Yihui He, Marios Savvides, and Zhiqiang Shen. Mobinet: A mobile binary network for image classification. In The IEEE Winter Conference on Applications of Computer Vision (WACV), March 2020.
-  Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In ECCV (4), volume 9908 of Lecture Notes in Computer Science, pages 525–542. Springer, 2016.
-  Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. In The AAAI Conference on Artificial Intelligence, 2018.
-  Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V. Le, and Alexey Kurakin. Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pages 2902–2911. JMLR.org, 2017.
-  Sachin Mehta, Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi. Espnet: Efficient spatial pyramid of dilated convolutions for semantic segmentation. In ECCV, 2018.
-  Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation. CoRR, abs/1801.04381, 2018.
-  L. Sifre. Rigid-motion scattering for image classification, 2014.
-  Ziwei Wang, Jiwen Lu, Chenxin Tao, Jie Zhou, and Qi Tian. Learning channel-wise interactions for binary convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
-  L. Xie and A. Yuille. Genetic cnn. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 1388–1397, Oct 2017.
-  Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016.
-  Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, and Frank Hutter. NAS-Bench-101: Towards Reproducible Neural Architecture Search. arXiv e-prints, Feb 2019.
-  Jiahui Yu and Thomas S. Huang. Universally slimmable networks and improved training techniques. ArXiv, abs/1903.05134, 2019.
-  Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas S. Huang. Slimmable neural networks. CoRR, abs/1812.08928, 2018.
-  Jianhao Zhang, Yingwei Pan, Ting Yao, He Zhao, and Tao Mei. dabnn: A super fast inference framework for binary neural networks on arm devices. In Proceedings of the 27th ACM International Conference on Multimedia, pages 2272–2275, 2019.
-  Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6848–6856, 2018.
-  Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. CoRR, abs/1606.06160, 2016.
-  Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016.
-  Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017.