Deep neural networks have demonstrated impressive accuracy in many computer vision applications such as image classification, object detection and recognition, and semantic segmentation. However, their increasing computation cost requires high-end devices such as GPUs for real-time inference. It has been a challenging task to deploy deep network based face detectors on edge devices due to their limited resources (e.g. memory size and computation power). To deploy deep models on edge devices, many approaches have been proposed, such as network pruning [8, 14], efficient architecture design (e.g. MobileNet [9, 29]) and quantized networks [23, 1]. For embedded devices in particular, quantized networks are attractive because of their impressive compression ratio (e.g. 32 times savings on model size) and easy conversion to fixed-point representation. For instance, IFQ-Net designs a tiny fixed-point face detector (240.9 KB) by slimming, quantizing and fixed-point converting the layers of the Tiny-YOLO network. Even though the fixed-point conversion is lossless, the accuracy drop caused by slimming and quantization is still notable.
In this paper, to compress the model size and meanwhile improve the accuracy of a quantized network, we propose DupNet, as shown in Figure 1. Firstly, to compress the model size, DupNet employs weights with duplicated channels for the weight-intensive layers, which are usually the upper layers. Secondly, to improve the accuracy, DupNet duplicates the input feature maps, and thus uses more weight channels, for the quantization-sensitive layers (usually located in the lower part of the network) whose quantization causes significant accuracy drop. In detail, to further compress a quantized network, we force the weights of the weight-intensive layers to have identical channels. As shown in Figure 2, to generate such identical channels, we employ template weights which have fewer channels than the input feature maps and duplicate them to obtain the proper number of channels for convolution. At inference time, the duplicated weights can be restored from the template weights. Consequently, model size savings can be easily achieved by storing only the template weights.
Another major issue of quantized networks is that their accuracy is usually degraded by a large margin. For example, in XNOR-Net, binarizing both the weights and feature maps of AlexNet leads to a 12% accuracy drop on the ImageNet dataset. In the case of face detection, we observe that the accuracy drop is mainly caused by the quantization of several specific layers, which we name quantization-sensitive layers (see Section 4.2). Usually, they are the lower layers, which have only a small number of output feature maps. Thus, quantizing them into extremely low bits severely harms the representative power of the output features. To address the problem, one may simply employ more feature maps or quantize them into higher bits, both of which would increase the memory usage on feature maps.
In this paper, we propose to duplicate the input feature maps of the quantization-sensitive layers to improve their accuracy (Figure 2). It is true that simply duplicating the feature maps does not introduce extra information. However, it allows us to use weights with more (non-identical) channels for convolving more representative outputs. The advantage of our method is that it does not require extra memory for feature maps, which is a critical issue on embedded devices. It does increase the memory usage on weights; however, as demonstrated in Section 4.2, the quantization-sensitive layers are usually the lower layers of a network, which have only a small number of weights. Consequently, this memory increase affects the overall model size very little.
In summary, we propose DupNet, which employs duplicated weights for the weight-intensive layers and duplicates the input feature maps of the quantization-sensitive layers of a quantized network. The benefits of our proposal are twofold: 1) it reduces the model size of a quantized network through duplicated weights for the weight-intensive layers; 2) it increases the accuracy by duplicating the input feature maps of the quantization-sensitive layers. Based on DupNet, we design a very tiny quantized CNN with an impressive accuracy improvement for face detection. The model size of our network is only 36.9 KB, which makes it the tiniest deep learning based face detector to the best of our knowledge.
2 Related Work
2.1 Face Detection
Two main approaches, namely one-stage and two-stage methods, have been successfully inherited from the object detection domain for face detection. Two-stage methods follow a common two-step pipeline: 1) generate a set of region proposals with their local features; 2) pass them to a network that classifies the detected objects and regresses their bounding boxes. For example, Faster R-CNN proposes an efficient Region Proposal Network (RPN) to generate region proposals and then uses a Fast R-CNN network to refine the proposals. To improve the speed of Faster R-CNN, R-FCN proposes to share the convolutional features between the RPN and Fast R-CNN networks. To further improve the speed, Li et al. propose Light-Head R-CNN, which employs a lightweight head network to reduce the computation complexity. To speed up the R-FCN network for detecting 3000 object classes, Singh et al. propose to employ position-sensitive feature maps only for several predefined super-classes.
On the other hand, one-stage approaches usually employ a single network to classify and regress the objects [18, 25, 26] and thus can usually run faster. For example, YOLO predicts 2 bounding boxes in each grid cell for VOC object detection. Furthermore, YOLOv2 employs a fully convolutional network, which results in an m × n grid (m, n are the width and height of the output feature maps), and uses predefined anchors to better predict the bounding boxes of the objects. In DetNet, Li et al. propose a backbone network that improves the accuracy by maintaining high resolution feature maps and reduces the computation complexity by decreasing the width of the upper layers.
In spite of the enormous progress in reducing the complexity of two-stage methods, such region proposal based frameworks may be expensive for embedded devices because they usually need to store the features from previous layers. Therefore, we employ the widely used one-stage YOLOv2 pipeline for our face detector.
2.2 Deep Network Compression
To reduce the computation cost of deep models, many approaches have been studied. One direction is to design novel efficient architectures. For example, by replacing a standard convolution layer with the combination of a depth-wise and a point-wise (1×1) convolution layer, MobileNets reduce the weights and computation by 8 to 9 times. Similarly, LBCNN employs predefined binary patterns for the depth-wise convolution and shares those patterns over multiple layers for further compression.
Another direction for designing a compact model is to compress the network through pruning, quantization, etc. Pruning methods eliminate the less important connections and fine-tune the pruned network to narrow the accuracy drop. For example, Wei et al. reduce the input and output channels of each layer of VGG by 32 times and design a very small detector whose size is only 132 KB. In contrast, quantization approaches aim to quantize the floating-point data of a network into low-bit data. For example, XNOR-Net and HWGQ-Net achieve 32 times savings on model size by binarizing (1-bit) the network weights. In addition to quantizing the weights, further quantizing the feature maps into low-bit data reduces the feature map memory usage and meanwhile increases the inference speed. For example, XNOR-Net, which quantizes both weights and feature maps into 1 bit, is theoretically 64 times faster than its full precision counterpart. Furthermore, for embedded devices such as FPGAs and ASICs, quantized networks are particularly attractive because they lead to higher throughput and lower power consumption when converted into fixed-point networks.
One interesting topic is further exploring the redundancy of, and compressing, an already quantized network. For example, in prior work, various networks (VGG16, MobileNet) are first quantized to 8-bit data and then further pruned by 24%. Similarly, Li and Ren explore the redundancy of a Binarized Neural Network (BNN) and further compress the model size by 3.9 times through bit-level data pruning.
Unlike methods that explore the redundancy through carefully tuned strategies, we propose to simply employ duplicated weights, which contain many identical channels, for the weight-intensive layers. Since the duplicated weights can be easily restored from the template weights, which contain all the non-identical channels, it is sufficient to store only the template weights in memory during inference.
2.3 Accuracy Improvement for Quantized Network
Even though quantizing a network into low-bit data leads to a promising reduction in computation cost, an accuracy drop is usually observed. As demonstrated in [10, 19], quantizing the network data into 8 bits leads to only a minor accuracy drop on the ImageNet classification task. Nevertheless, quantizing the network into lower bits usually results in notable accuracy degradation. For example, XNOR-Net, which quantizes both its weights and feature maps into 1 bit, observes a 12.6% accuracy drop (56.8% vs. 44.2%). Based on that, HWGQ-Net gains 8.2% accuracy back by using 2 bits for its feature maps (52.4% on ImageNet). Additionally, for object detection tasks, a 3%-5% drop has been observed.
To improve the accuracy of quantized networks, much effort has been devoted to better strategies for training them. INQ proposes to incrementally quantize the weights and achieves more accurate quantized networks through iterative fine-tuning. Similarly, in related work, the weights and activations are first quantized to 16 bits, then to 4 bits and finally to 2 bits. PACT optimizes the clipping thresholds for better quantization of the feature maps. Besides, knowledge distillation additionally uses the knowledge from a teacher network to guide the training process of a student network.
To narrow the accuracy drop caused by network quantization, we propose to duplicate the feature maps of the quantization-sensitive layers, which allows us to use weights with more channels for convolving more representative features. The advantage of our method is that it gives a significant accuracy improvement without increasing the feature map memory usage.
3 Our Approach: DupNet
To further compress the model size and improve the accuracy of a quantized network for face detection, we propose to employ weights with duplicated channels in the weight-intensive layers and duplicate the input feature maps of its quantization-sensitive layers.
3.1 Duplicated Weights for Model Compression
As discussed in prior work, even though the network data is quantized into very low-bit values, redundancy still exists. In this section, we illustrate our method, which employs template weights with fewer channels and thus less redundancy. During the convolution, we duplicate the template weights to the required number of channels so that they can convolve with the input feature maps.
We assume the quantized network employs convolutions in which the input feature maps and weights are quantized to 2 bits and 1 bit respectively. We represent the weights and input feature maps as W and X, where C, h and w denote the number of channels, the width and the height of the input feature maps, and k is the kernel size of the convolution (although a single kernel size is used for explanation, our method can also compress convolutions with other kernel sizes). To compress the model size, we define template weights W_t which have fewer channels (C_t < C). However, W_t cannot be used to convolve with X directly since they have different numbers of channels. To solve this problem, we duplicate the channels of the template weights to the required number C and obtain the duplicated weights W_d.
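As a minimal sketch of how duplicated weights can be restored from a stored template at inference time (all names and shapes here are illustrative, not from the paper's implementation):

```python
import numpy as np

# Template weights for a layer with 512 output filters, 512 input
# channels and 1x1 kernels, compressed 4x along the input-channel axis.
n_out, c_in, k, d = 512, 512, 1, 4
Wt = np.random.randn(n_out, c_in // d, k, k)   # only this tensor is stored

# Inference-time restoration: repeat the template d times along the
# input-channel axis so it can convolve with the full-width input.
W_dup = np.tile(Wt, (1, d, 1, 1))

assert W_dup.shape == (n_out, c_in, k, k)
assert Wt.size * d == W_dup.size               # d-fold storage saving
```

With this layout, channel i of the template reappears at channels i, i + 128, i + 256 and i + 384 of the duplicated tensor.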
During the training process, it is straightforward to compute the gradient ∂L/∂W_d of the duplicated weights W_d by standard convolution backpropagation, where L represents the loss of the network for the given training samples. As shown in Figure 2, to compute the gradient of the template weights W_t, we average the corresponding channels of ∂L/∂W_d. Specifically, assume the duplicated weights have 512 channels and the template weights have 128 channels for 4 times compression, so that the 0th, 128th, 256th and 384th channels of W_d are duplicated from the 0th channel of W_t. To compute the 0th channel of the gradient ∂L/∂W_t, we element-wise average the 0th, 128th, 256th and 384th channels of ∂L/∂W_d. This gradient averaging is repeated for the 1st to 127th channels of ∂L/∂W_t. Finally, the template weights can be learned by iteratively updating them with their gradients using SGD optimization.
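The gradient averaging step can be sketched in NumPy as follows (a toy with random stand-in gradients, assuming 128 template channels copied to 512 duplicated channels):

```python
import numpy as np

n_out, c_tpl, d, k = 512, 128, 4, 1
# Gradient w.r.t. the duplicated weights, as produced by standard
# convolution backpropagation (random stand-in values here).
dW_dup = np.random.randn(n_out, c_tpl * d, k, k)

# Channel i of the template was copied to channels i, i+128, i+256 and
# i+384, so its gradient is the element-wise average of those four.
dWt = dW_dup.reshape(n_out, d, c_tpl, k, k).mean(axis=1)

# Spot-check channel 0 against the explicit four-channel average.
ref = (dW_dup[:, 0] + dW_dup[:, 128] + dW_dup[:, 256] + dW_dup[:, 384]) / 4
assert np.allclose(dWt[:, 0], ref)
```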
Since the duplicated weights W_d can be easily restored from the template weights W_t, it is only necessary to store W_t. Thus, the model size of the layer is reduced by the duplication factor. Nevertheless, our compression method may harm the accuracy because the duplicated weights contain fewer non-identical channels. Considering that the model size is usually dominated by the weight-intensive layers, we apply our compression method only on these layers to prevent a significant accuracy drop.
Furthermore, given that many channels of the duplicated weights are identical, we can reduce the computation complexity as follows. We split the duplicated weights W_d along the channel axis into d groups W_1, ..., W_d, each of which is identical to the template weights W_t. Similarly, the feature maps X can be split into X_1, ..., X_d, which are non-identical. Let * denote the convolution operation and concat(·) a function that concatenates its members along the channel axis. Then X * W_d = concat(X_1, ..., X_d) * concat(W_1, ..., W_d) = Σ_i (X_i * W_t). Consequently, the convolution can alternatively be computed as X̃ * W_t, where X̃ = Σ_i X_i. The overall computation complexity of computing X̃ and X̃ * W_t is much smaller than that of X * W_d.
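This computation reduction can be sanity-checked with a toy 1x1 convolution in NumPy (the conv1x1 helper and all shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def conv1x1(x, w):
    """Toy 1x1 convolution: x has shape (C, H, W), w has shape (N, C)."""
    return np.einsum('nc,chw->nhw', w, x)

c_tpl, d, n_out, h, wdt = 8, 4, 16, 5, 5
Wt = np.random.randn(n_out, c_tpl)          # template weights
W_dup = np.tile(Wt, (1, d))                 # duplicated weights, d copies
X = np.random.randn(c_tpl * d, h, wdt)      # full input feature maps

y_full = conv1x1(X, W_dup)                  # convolve with duplicated weights

# Cheaper equivalent: first sum X_1 + ... + X_d, then convolve once
# with the small template (d times fewer multiply-adds in the conv).
X_sum = X.reshape(d, c_tpl, h, wdt).sum(axis=0)
y_fast = conv1x1(X_sum, Wt)

assert np.allclose(y_full, y_fast)
```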
3.2 Duplicate Feature Maps to Improve Accuracy
Quantized networks map their full precision data into low-bit data, which usually leads to a notable accuracy drop. In the following, we improve the degraded accuracy of a very tiny quantized face detector.
As discussed in Section 1, the accuracy degradation of a quantized face detector is mainly caused by the weak representative power of the quantized output feature maps of its quantization-sensitive layers. To enhance the representative power of their output features, one straightforward way is to simply make these layers wider (more input feature maps). However, this significantly increases the memory usage on feature maps. Moreover, since these layers usually locate in the lower part of a network, such a memory increase may become a critical issue because their feature maps hold high resolution (large width and height). In contrast, their number of channels is usually small and thus their weights are small too. Consequently, we propose to duplicate the input feature maps and employ more weight channels for better output features. As shown in Figure 2, the input feature maps are duplicated 4 times and thus the number of weight channels is also increased 4 times. For the backward pass during training, to obtain the gradients of the original input feature maps, we first compute the gradients of the duplicated feature maps and then average every 4 corresponding channels that hold identical copies. Finally, the gradients are propagated back to the previous layers.
Compared with the strategy that simply uses more input feature maps, our method does not require extra memory for feature maps. Thanks to the increased number of input channels, we can employ weights with more channels. Consequently, the input feature maps are convolved with more patterns, which leads to more representative power in the resulting features. Even though our method increases the weights size, this cost increase has limited influence on the overall model size because the weights of these layers are usually very small (see Table 1). Similar to the theory explained in Section 3.1, one can also achieve a speedup by replacing the convolution of the duplicated inputs with a convolution of the original inputs and summed weights W̃, where W̃ is obtained by summing the corresponding channels of W.
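A NumPy sketch of this dual equivalence, using a toy 1x1 convolution with illustrative shapes (conv1x1 and all names are assumptions, not the paper's code): convolving duplicated input feature maps with a wide weight tensor W matches convolving the original input with channel-summed weights.

```python
import numpy as np

def conv1x1(x, w):
    """Toy 1x1 convolution: x has shape (C, H, W), w has shape (N, C)."""
    return np.einsum('nc,chw->nhw', w, x)

c_in, d, n_out, h, wdt = 3, 4, 8, 5, 5
X = np.random.randn(c_in, h, wdt)           # original input feature maps
W = np.random.randn(n_out, c_in * d)        # wider weights for the duplicated input

X_dup = np.tile(X, (d, 1, 1))               # duplicate input feature maps d times
y = conv1x1(X_dup, W)

# Fast form: sum the weight channels that meet identical feature-map
# copies, then convolve the original (un-duplicated) input once.
W_tilde = W.reshape(n_out, d, c_in).sum(axis=1)
assert np.allclose(y, conv1x1(X, W_tilde))
```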
4 Experimental Results
To design a very tiny CNN for face detection, we borrow the compression ideas from IFQ-Tinier-YOLO, which compresses the Tiny-YOLO network by 260 times through halving the filter numbers of all convolution layers, replacing one 3×3 layer that contains massive parameters with 1×1 kernels, and binarizing the weights of all layers. Moreover, we further halve the filter numbers and apply the proposed duplicated weights to its weight-intensive layers (Conv6–Conv8), achieving 6.7 times further savings on model size. Besides, we will demonstrate that duplicating the input feature maps of its quantization-sensitive layers can significantly improve the accuracy.
We employ WiderFace training images to train our models using the Darknet framework. For fair comparison, all models are trained with the same strategies: 1) the models are trained for 100k iterations with the SGD optimization method; 2) the learning rate is initially set to 0.01 and downscaled by a factor of 0.1 at four milestone iterations; 3) all models are trained from scratch. Furthermore, we use the FDDB benchmark, which contains 5,171 faces within 2,845 test images, to evaluate the accuracy of our face detectors. Following prior work, we use the detection rate when 284 false positives are reached (on average allowing 1 false positive in every 10 images) as the evaluation metric.
4.1 Model Compression
To further compress the model size of IFQ-Tinier-YOLO, we analyze the weights size of each of its layers. Meanwhile, to measure the computation complexity, we borrow the term #FLOPs (floating-point operations), which is generally used for full precision networks. (On 64-bit computing devices, 64 multiply-adds of a binarized convolution are counted as 1 FLOP; similarly, we assume that 32 multiply-adds of the 2-bit convolutions and 8 multiply-adds of the first convolution (Conv1) equal 1 FLOP respectively.) Nevertheless, it is worth pointing out that our network can be losslessly converted to a fixed-point network and thus does not require any floating-point operation.
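As a small worked example of this counting convention (the packing factors are from the text; the multiply-add count and helper name are illustrative):

```python
def macs_to_flops(macs, packing):
    """Convert a multiply-add (MAC) count into #FLOPs, where `packing`
    is the number of MACs counted as one FLOP (64 for the binarized
    convolutions, 32 for the 2-bit ones, 8 for Conv1)."""
    return macs / packing

macs = 64_000_000                  # hypothetical MACs of a binarized layer
flops = macs_to_flops(macs, 64)    # 1 FLOP per 64 binary multiply-adds
assert flops == 1_000_000          # i.e. 1 MFLOP for this layer
```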
[Table 1: kernel size (W), feature size (X), #FLOPs (million) and weights size (KB) per layer]
As shown in Table 1, the weight-intensive layers of the IFQ-Tinier-YOLO model are the Conv6–Conv8 layers. Consequently, to further compress the model size, we apply two techniques to these layers: halving the filter numbers and employing duplicated weights. Regarding the duplicated weights, we conducted experiments with 2, 4 or 8 times duplication to find the best trade-off between compression ratio and detection rate.
We first halve the filter numbers of the weight-intensive layers and thus reduce the model size from 240.9 KB to 82.4 KB (marked as “1” in Figure 3). Meanwhile, this model achieves a 0.837 detection rate, which is very close to IFQ-Tinier-YOLO (0.84). Additionally, as shown in Figure 3, employing 2 or 4 times duplicated weights gives further reduction in model size without a drop in detection rate. More specifically, with the help of the halved filter numbers and 4 times duplicated weights, we reduce the model size of IFQ-Tinier-YOLO from 240.9 KB to 35.9 KB, a 6.7 times reduction. Furthermore, when compressing Conv6–Conv8 by 8 times, the accuracy decreases by only 2.1% while the compression ratio increases to 8.5 times.
The reason our compression method does not cause a notable accuracy drop is that redundant connections exist in those three layers. However, one may argue that further reducing the number of their filters can also reduce the model size. Consequently, we compare such a method (marked as “Filter slimming”) with ours in Figure 4. For fair comparison, we reduce the filter numbers of Conv6–Conv8 so that they have similar model sizes to our duplicated weights models. For example, to compare with our model with 4 times compression on all Conv6–Conv8 layers, we instead halve their filter numbers, resulting in 2, 4 and 2 times compression for these three layers respectively. As demonstrated in Figure 4, over different compression ratios, our weights duplication based method generally outperforms the filter slimming method.
Above, we demonstrated that our duplicated weights based compression is very effective for the quantized network described in Section 3.1. To demonstrate the generalization ability of our method, we further test it on networks quantized into different precisions. As shown in Table 2, our method with 4 times weights duplication also gives no accuracy drop for these networks. When further compressing them by 8 times, slight degradation is observed. One interesting observation is that the higher the precision of the network, the smaller the accuracy drop. For example, with 8 times compression, the detection rate drop for the lowest-precision network is 3.8%, while it is only 2.0% and 0.6% for the higher-precision ones. We believe this is because the more accurate the network is, the more redundancy usually exists in its connections.
[Table 2: detection rate for different weights duplication factors and network precisions]
4.2 Accuracy Improvement
The accuracy of quantized networks is usually notably lower than that of their full precision counterparts. For example, the quantized face detector IFQ-Tinier-YOLO suffers a 6% drop in detection rate. On the other hand, quantizing different layers leads to widely-varied performance loss. To improve the accuracy, we first locate the quantization-sensitive layers of a quantized face detector through a layer-wise quantization strategy.
[Table 3: quantized convolution layers, #FLOPs (million), model size (KB) and detection rate]
As shown in Table 3, we first quantize the Conv4–Conv8 convolution layers of a full precision counterpart of the IFQ-Tinier-YOLO network, but with halved filter numbers in Conv6–Conv8. In this subsection, to isolate the accuracy improvement from duplicating the input feature maps, the duplicated weights based compression is not applied. As shown in Table 3, quantizing Conv4–Conv8 leads to 3.2 and 7.2 times reductions in MFLOPs (millions of FLOPs) and model size respectively, while the detection rate drops by only 2.2%. Nevertheless, progressively quantizing Conv3–Conv2 and then Conv1 causes 2.2% and 1.3% accuracy drops respectively but gives much smaller reductions in inference cost. Thus, we define Conv1–Conv3 as the quantization-sensitive layers of the network. We believe the reason is that they contain only a limited number of feature maps; consequently, quantizing them severely damages the representative power of their output features. Finally, quantizing Conv9, which results in a fully quantized Tinier-YOLO model, gives further remarkable savings on computation cost while the accuracy decreases by only 0.8%.
[Table 4: feature maps duplication, #FLOPs (million), model size (KB) and detection rate]
To improve the accuracy of the fully quantized Tinier-YOLO, we first duplicate the input feature maps of Conv2 and Conv3 by 4 and 2 times respectively. As shown in Table 4, this gives a 3.0% increase in detection rate while the model size and computation complexity increase by only 1.2% and 27.0% respectively. Furthermore, additionally duplicating the feature maps of Conv1 by 4 times gives another 0.5% increase in detection rate while the model size increases by only 0.1 KB. However, the computation complexity increases from 62.6 MFLOPs to 92.6 MFLOPs (a 47.9% increase). Finally, we further duplicate the feature maps of Conv9 by 2 times and achieve a 1.8% improvement in detection rate at the price of 3.3% and 10.1% increases in #FLOPs and model size respectively.
[Table 5: quantization precision and detection rate]
On the other hand, employing more bits for the weights of the quantization-sensitive layers can also improve the accuracy. For fair comparison, in the case of 4 times duplication (e.g. Conv2), we use 3-bit weights and compare them with our method, which can be computed by convolving the original input with summed weights W̃, where each element of W̃ is the summation of four binary elements (either -1 or +1) from four corresponding channels (see Section 3.2) and thus can also be represented using 3 bits. As shown in Table 5, our method generally gives a higher detection rate. Furthermore, our method is more attractive for hardware design in three aspects: 1) it uses less information (only 5 possible values vs. 8), which makes coding-based further compression easier (e.g. Huffman coding and RLC); 2) many of its weights are 0 and thus the corresponding computation can be skipped; 3) our model can be computed using a single type of convolution (the Conv1 convolution can be computed by accumulating four such convolutions), which makes the hardware design simpler.
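Regarding aspect 1), that the sum of four binary (-1/+1) channels can take only five distinct values is easy to verify:

```python
from itertools import product

# Enumerate all possible sums of four binary weights drawn from {-1, +1}.
sums = {sum(bits) for bits in product((-1, +1), repeat=4)}

assert sums == {-4, -2, 0, 2, 4}   # only 5 values, vs. 8 for a generic 3-bit weight
```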
4.3 Face Detectors Comparison
As demonstrated in the previous experiments, employing duplicated weights gives remarkable compression without an obvious accuracy drop, while duplicating the feature maps of the quantization-sensitive layers improves the detection rate by a large margin. In this section, we combine these two techniques to design DupNet-Tinier-YOLO, a very tiny quantized face detector with improved accuracy. In detail, we employ 4 times compression for the weight-intensive layers (Conv6–Conv8) and duplicate the input feature maps of Conv2–Conv3. We initially choose not to duplicate the input feature maps of Conv1 in DupNet-Tinier-YOLO to avoid a notable increase in #FLOPs. Regarding the model size, 4 times weights compression reduces it from 82.4 KB to 35.9 KB, and duplicating the input feature maps increases it to 36.9 KB.
[Table 6: models, #FLOPs (million), model size (KB) and detection rate]
As shown in Table 6, compared with IFQ-Tinier-YOLO, our DupNet-Tinier-YOLO (denoted “DupNet”) gives 6.5 times savings on model size and 42.0% fewer MFLOPs. Meanwhile, it also gives a 2.4% improvement in detection rate. To further improve the detection rate with an acceptable cost increase, we design DupNet-Tinier-YOLO-L (denoted “DupNet-L”), which additionally duplicates the input feature maps of Conv1 and Conv9 by 4 and 2 times respectively. As shown in Table 6, DupNet-Tinier-YOLO-L gives a further 2.5% higher detection rate. Nevertheless, its model size and #FLOPs increase to 45.4 KB and 95.7 MFLOPs respectively, both of which are still smaller than those of IFQ-Tinier-YOLO.
Furthermore, we employ PACT to train optimal clipping thresholds for feature map quantization to improve the accuracy. As illustrated in Table 6, the PACT algorithm improves DupNet-Tinier-YOLO and DupNet-Tinier-YOLO-L by 2.1% and 2.2% respectively. Compared with the Tiny-YOLO network, our DupNet-Tinier-YOLO achieves 389.9 and 1694.2 times reductions in #FLOPs and model size respectively, while the detection rate decreases by only 4.0%. On the other hand, the accuracy of DupNet-Tinier-YOLO-L decreases by only 1.4% while the inference cost reduction remains impressive. Besides, we compare their accuracy in terms of ROC curves in Figure 5.
To demonstrate the performance of our detector on more challenging faces, we also test DupNet-Tinier-YOLO-L on the WiderFace test dataset. As shown in Figure 6, our model gives excellent detection quality in various challenging scenarios such as tiny faces, low illumination, severe occlusion and degraded coloring.
In summary, we proposed the DupNet-Tinier-YOLO face detector, which is quantized, very tiny and accurate. By employing duplicated weights for the weight-intensive layers, we reduced the model size and #FLOPs of IFQ-Tinier-YOLO by 6.5 times and 42.0% respectively. Meanwhile, we increased its detection rate by 4.5% using the proposed DupNet together with the PACT technique. Moreover, we demonstrated that DupNet can be flexibly adjusted for different inference costs (e.g. DupNet-Tinier-YOLO-L has higher cost and is more accurate).
In this paper, we proposed DupNet, which employs duplicated weights for the weight-intensive layers of a quantized CNN to compress its model size. Furthermore, we observed that the degraded accuracy of a quantized CNN is mainly caused by its quantization-sensitive layers, whose quantized output feature maps have poor representative power. Hence, DupNet also duplicates the input feature maps of these layers and employs more weight channels to improve their output features. Through experiments on the FDDB dataset, we demonstrated that our DupNet-Tinier-YOLO face detector significantly compresses the model size and meanwhile impressively improves the detection rate. Moreover, our DupNet-Tinier-YOLO face detector can be losslessly converted into a fixed-point network and thus can be easily implemented on embedded devices.
Additionally, our DupNet can be combined with other algorithms proposed to improve the performance of compressed networks, such as knowledge distillation. Moreover, although we only test our method on face detection, it is also applicable to other tasks such as object detection, face recognition and semantic segmentation.
-  Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pages 5406–5414, 2017.
-  Yu-Hsin Chen, Tushar Krishna, Joel S. Emer, and Vivienne Sze. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits, 52(1):127–138, 2017.
-  Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. PACT: Parameterized clipping activation for quantized neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
-  Jifeng Dai, Yi Li, Kaiming He, and Jian Sun. R-FCN: Object detection via region-based fully convolutional networks. In Annual Conference on Neural Information Processing Systems, NIPS, 2016.
-  Mark Everingham, S.M. Ali Eslami, Luc Van Gool, Christopher K.I. Williams, John Winn, and Andrew Zisserman. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision, 111(1):98–136, 2015.
-  Hongxing Gao, Wei Tao, Dongchao Wen, TseWei Chen, Kinya Osa, and Masami Kato. IFQ-Net: Integrated fixed-point quantization networks for embedded vision. In IEEE Embedded Vision Workshop, CVPRW, 2018.
-  Song Han, Huizi Mao, and William J. Dally. Deep Compression: compressing deep neural networks with pruning, trained quantization and huffman coding. In International Conference on Learning Representations, ICLR, 2016.
-  Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Annual Conference on Neural Information Processing Systems, NIPS, 2015.
-  Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2017.
-  Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
-  Vidit Jain and Erik Learned-Miller. FDDB: A benchmark for face detection in unconstrained settings. Technical Report UM-CS-2010-009, University of Massachusetts, Amherst, 2010.
-  Felix Juefei-Xu, Vishnu Naresh Boddeti, and Marios Savvides. Local binary convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2017.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. Imagenet classification with deep convolutional neural networks. In Annual Conference on Neural Information Processing Systems, NIPS, 2012.
-  Hao Li, Asim Kadav, Igor Durdannovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. In International Conference on Learning Representations, ICLR, 2017.
-  Yixing Li and Fengbo Ren. Build a compact binary neural network through bit-level sensitivity and data pruning. ArXiv e-prints, 2018.
-  Zeming Li, Chao Peng, Gang Yu, Xiangyu Zhang, Yangdong Deng, and Jian Sun. DetNet: a backbone network for object detection. In IEEE European Conference on Computer Vision, ECCV, 2018.
-  Zeming Li, Chao Peng, Gang Yu, Xiangyu Zhang, Yangdong Deng, and Jian Sun. Light-Head R-CNN: In defense of two-stage object detector. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
-  Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: single shot multibox detector. In IEEE European Conference on Computer Vision, ECCV, pages 21–37, 2016.
-  Szymon Migacz. 8-bit inference with TensorRT. http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf, May 2017.
-  Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. In International Conference on Learning Representations, ICLR, 2017.
-  Mi Sun Park, Xiaofan Xu, and Cormac Brick. SQuantizer: Simultaneous learning for both sparse and low-precision neural networks. ArXiv e-prints, 2018.
-  Qing Qin, Jie Ren, Jialong Yu, Ling Gao, Hai Wang, Jie Zheng, Jianbin Fang, and Zheng Wang. To compress, or not to compress: Characterizing deep learning model compression for embedded inference. ArXiv e-prints, 2018.
-  Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: imagenet classification using binary convolutional neural networks. In IEEE European Conference on Computer Vision, ECCV, pages 525–542, 2016.
-  Joseph Redmon. Darknet: Open source neural networks in C. http://pjreddie.com/darknet/, 2013–2016.
-  Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pages 779–788, 2016.
-  Joseph Redmon and Ali Farhadi. YOLO9000: better, faster, stronger. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, pages 6517–6525, 2017.
-  Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Analysis and Machine Intelligence, 39(6):1137–1149, 2017.
-  Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
-  Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
-  Bharat Singh, Hengduo Li, Abhishek Sharma, and Larry S. Davis. R-FCN-3000: decoupling detection and classification. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
-  Yap June Wai, Zulkalnain bin Mohd Yussof, Sani Irwan bin Salim, and Lim Kim Chuan. Fixed point implementation of Tiny-YOLO-v2 using OpenCL on FPGA. International Journal of Advanced Computer Science and Applications (IJACSA), 9(10):506–512, 2018.
-  Yi Wei, Xinyu Pan, Hongwei Qin, Wanli Ouyang, and Junjie Yan. Quantization mimic: Towards very tiny cnn for object detection. In IEEE European Conference on Computer Vision, ECCV, 2018.
-  Shuo Yang, Ping Luo, Chen Change Loy, and Xiaoou Tang. WIDER FACE: A face detection benchmark. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2016.
-  Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. In International Conference on Learning Representations, ICLR, 2017.
-  Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian Reid. Towards effective low-bitwidth convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
-  Lixue Zhuang, Yi Xu, Bingbing Ni, and Hongteng Xu. Flexible network binarization with layer-wise priority. In IEEE International Conference on Image Processing, ICIP, 2018.