1 Introduction

Given the advent of Graphics Processing Units (GPUs), deep convolutional neural networks (CNNs) with billions of floating-point multiplications have received significant speed-ups and made important strides in a large variety of computer vision tasks, e.g. image classification [26, 17], object detection, segmentation, and human face verification. However, the high power consumption of these high-end GPU cards (e.g. 250W+ for a GeForce RTX 2080 Ti) has blocked modern deep learning systems from being deployed on mobile devices such as smartphones, cameras, and watches. Existing GPU cards are far from svelte and cannot be easily mounted on mobile devices. Though the GPU itself only takes up a small part of the card, much other supporting hardware is required, e.g. memory chips, power circuitry, voltage regulators and other controller chips. It is therefore necessary to study efficient deep neural networks that can run with affordable computational resources on mobile devices.
Addition, subtraction, multiplication and division are the four most basic operations in mathematics. It is widely known that multiplication is slower than addition, yet most of the computation in deep neural networks consists of multiplications between float-valued weights and float-valued activations during forward inference. Many papers have therefore studied how to trade multiplications for additions to speed up deep learning. The seminal BinaryConnect work forced the network weights to be binary (e.g. -1 or 1), so that many multiply-accumulate operations can be replaced by simple accumulations. After that, Hubara et al. proposed BNNs, which binarize not only weights but also activations in convolutional neural networks at run-time. Moreover, Rastegari et al. introduced scale factors to approximate convolutions using binary operations and outperformed [15, 22] by large margins. Zhou et al. utilized low bit-width gradients to accelerate the training of binarized networks. Cai et al. proposed a half-wave Gaussian quantizer for forward approximation, which achieved performance much closer to that of full-precision networks.
Though binarizing the filters of deep neural networks significantly reduces the computational cost, the original recognition accuracy often cannot be preserved. In addition, the training procedure of binary networks is not stable and usually suffers from slow convergence with a small learning rate. Convolutions in classical CNNs are actually cross-correlations that measure the similarity of two inputs. Researchers and developers are used to taking convolution as the default operation for extracting features from visual data, and have introduced various methods to accelerate it, even at the risk of sacrificing network capability. But there has been hardly any attempt to replace convolution with a more efficient similarity measure that preferably involves only additions. In fact, additions have much lower computational complexity than multiplications. Thus, we are motivated to investigate the feasibility of replacing multiplications by additions in convolutional neural networks.
Figure 1: (a) Visualization of features in AdderNets. (b) Visualization of features in CNNs.
In this paper, we propose adder networks that maximize the use of addition while abandoning convolution operations. Given a series of small templates as “filters” in the neural network, the ℓ1-distance could be an efficient measure to summarize the absolute differences between the input signal and the template, as shown in Figure 1. Since subtraction can be easily implemented through addition by using complement code, the ℓ1-distance is a hardware-friendly measure that requires only additions, and naturally becomes an efficient alternative to convolution for constructing neural networks. An improved back-propagation scheme with regularized gradients is designed to ensure sufficient updates of the templates and better network convergence. The proposed AdderNets are evaluated on several benchmarks, and experimental results demonstrate AdderNets’ advantages in accelerating the inference of deep neural networks while preserving recognition accuracy comparable to conventional CNNs.
This paper is organized as follows. Section 2 investigates related works on network compression. Section 3 proposes Adder Networks, which replace the multiplications in conventional convolution filters with additions. Section 4 evaluates the proposed AdderNets on various benchmark datasets and models, and Section 5 concludes this paper.
2 Related works
To reduce the computational complexity of convolutional neural networks, a number of works have been proposed for eliminating useless calculations.
2.1 Network Pruning
Pruning-based methods aim to remove redundant weights to compress and accelerate the original network. Denton et al. decomposed the weight matrices of fully-connected layers into simple calculations by exploiting singular value decomposition (SVD). Han et al. proposed discarding subtle weights in pre-trained deep networks to omit their original calculations without affecting the performance. Wang et al. further converted convolution filters into the DCT frequency domain and eliminated more floating-point multiplications. In addition, Hu et al. discarded redundant filters with less impact to directly reduce the computations brought by these filters. Luo et al. discarded redundant filters according to the reconstruction error. He et al. utilized a LASSO regression to select important channels by solving a least-squares reconstruction. Zhuang et al. introduced additional losses to consider the discriminative power of channels and selected the most discriminative channels for the portable network.
2.2 Efficient Blocks Design
Instead of directly reducing the computational complexity of a pre-trained heavy neural network, many works have focused on designing novel blocks or operations to replace the conventional convolution filters. Iandola et al. introduced a bottleneck architecture to largely decrease the computational cost of CNNs. Howard et al. designed MobileNet, which decomposes the conventional convolution filters into point-wise and depth-wise convolution filters with much fewer FLOPs. Zhang et al. combined group convolutions and a channel shuffle operation to build efficient neural networks with fewer computations. Hu et al. proposed the squeeze-and-excitation block, which models interdependencies between channels, to improve the performance at slight additional computational cost. Wu et al. presented a parameter-free “shift” operation with zero FLOPs and zero parameters to replace conventional filters and largely reduce the computational and storage costs of CNNs. Zhong et al. further extended shift-based primitives to channel shift, address shift and shortcut shift to reduce the inference time on GPUs while keeping the performance. Wang et al. developed versatile convolution filters to generate more useful features using fewer calculations and parameters.
2.3 Knowledge Distillation
Besides eliminating redundant weights or filters in deep convolutional neural networks, Hinton et al. proposed the knowledge distillation (KD) scheme, which transfers useful information from a heavy teacher network to a portable student network by minimizing the Kullback-Leibler divergence between their outputs. Besides mimicking the final outputs of the teacher network, Romero et al. exploited a hint layer to distill the information in the features of the teacher network into the student network. You et al. utilized multiple teachers to guide the training of the student network and achieved better performance. Yim et al. regarded the relationship between features from two layers in the teacher network as a novel form of knowledge and introduced the FSP (Flow of Solution Procedure) matrix to transfer this kind of information to the student network.
Nevertheless, the networks compressed using these algorithms still contain massive numbers of multiplications, which cost enormous computational resources. In contrast, subtractions and additions have much lower computational complexity than multiplications, yet they have not been widely investigated in deep neural networks, especially in the widely used convolutional networks. Therefore, we propose to minimize the number of multiplications in deep neural networks by replacing them with subtractions or additions.
3 Networks without Multiplication
Consider a filter F ∈ R^{d×d×c_in×c_out} in an intermediate layer of a deep neural network, where d is the kernel size, c_in is the number of input channels and c_out is the number of output channels. The input feature is defined as X ∈ R^{H×W×c_in}, where H and W are the height and width of the feature, respectively. The output feature Y indicates the similarity between the filter and the input feature,

Y(m,n,t) = Σ_{i=0}^{d} Σ_{j=0}^{d} Σ_{k=0}^{c_in} S( X(m+i, n+j, k), F(i, j, k, t) ),  (1)

where S(·,·) is a pre-defined similarity measure. If cross-correlation is taken as the metric of distance, i.e. S(x, y) = x × y, Eq. (1) becomes the convolution operation. Eq. (1) also implies the calculation of a fully-connected layer when d = 1. In fact, there are many other metrics to measure the distance between the filter and the input feature. However, most of these metrics involve multiplications, which bring in more computational cost than additions.
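To make the role of the similarity measure S(·,·) concrete, the two instantiations discussed here can be sketched in a few lines of Python (a minimal illustration; `response`, `cross_corr` and `neg_l1` are assumed helper names, not code from the paper):

```python
import numpy as np

def response(x_patch, f_t, S):
    """Eq. (1) at one spatial position: sum over k of S(x_k, f_k)."""
    return sum(S(x, f) for x, f in zip(x_patch.ravel(), f_t.ravel()))

cross_corr = lambda x, f: x * f     # S(x, y) = x * y recovers convolution
neg_l1 = lambda x, f: -abs(x - f)   # the adder alternative of Section 3.1

x = np.array([1.0, -2.0, 0.5])
f = np.array([0.5, 0.5, 0.5])
```

With `cross_corr`, `response` reduces to an ordinary dot product between the flattened patch and filter; swapping in `neg_l1` changes only the per-element measure, not the structure of the computation.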
3.1 Adder Networks
We are therefore interested in deploying distance metrics that maximize the use of additions.
The ℓ1 distance calculates the sum of the absolute differences of two points’ vector representations, and contains no multiplication. Hence, by calculating the ℓ1 distance between the filter and the input feature, Eq. (1) can be reformulated as

Y(m,n,t) = −Σ_{i=0}^{d} Σ_{j=0}^{d} Σ_{k=0}^{c_in} | X(m+i, n+j, k) − F(i, j, k, t) |.  (2)

Addition is the major operation in the ℓ1 distance measure, since subtraction can be easily reduced to addition by using complement code. With the help of the ℓ1 distance, the similarity between the filters and the features can be efficiently computed.
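A minimal sketch of the adder operation in Eq. (2), using a naive loop over valid positions (the name `adder2d` and the single-image, channels-first layout are assumptions for illustration; a real implementation would use an optimized kernel):

```python
import numpy as np

def adder2d(X, W):
    """Valid-padding 'l1 convolution' of Eq. (2).

    X: input feature map, shape (c_in, h, w)
    W: filters, shape (c_out, c_in, d, d)
    Returns Y with Y[t, m, n] = -sum |X_patch - W[t]|.
    """
    c_in, h, w = X.shape
    c_out, _, d, _ = W.shape
    h_out, w_out = h - d + 1, w - d + 1
    Y = np.empty((c_out, h_out, w_out))
    for t in range(c_out):
        for m in range(h_out):
            for n in range(w_out):
                patch = X[:, m:m + d, n:n + d]
                # Only subtractions and accumulations: no multiplication
                Y[t, m, n] = -np.abs(patch - W[t]).sum()
    return Y
```

Since every term is a negated absolute difference, the response is maximal (zero) exactly where the patch equals the filter, and is never positive; this is the property that motivates the batch normalization step discussed next.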
Although both the cross-correlation in Eq. (1) and the ℓ1 distance in Eq. (2) can measure the similarity between filters and inputs, there are some differences in their outputs. The output of a convolution filter, as a weighted summation of values in the input feature map, can be positive or negative, but the output of an adder filter is always negative. Hence, we resort to batch normalization: the output of adder layers is normalized to an appropriate range, and all the activation functions used in conventional CNNs can then be used in the proposed AdderNets. Although the batch normalization layer involves multiplications, its computational cost is significantly lower than that of the convolutional layers and can be neglected. Considering a convolutional layer with a filter F ∈ R^{d×d×c_in×c_out}, an input X ∈ R^{H×W×c_in} and an output Y ∈ R^{H'×W'×c_out}, the computational complexity of convolution and batch normalization is O(d² c_in c_out H'W') and O(c_out H'W'), respectively. In practice, given the input channel numbers and kernel sizes used in ResNet, we have d² c_in ≫ 1. Since the batch normalization layer has been widely used in state-of-the-art convolutional neural networks, we can simply upgrade these networks into AdderNets by replacing their convolutional layers with adder layers to speed up inference and reduce the energy cost.
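The complexity comparison above can be sanity-checked by counting operations (a sketch with illustrative helper names and typical ResNet-style layer sizes; the exact channel counts are assumptions for illustration):

```python
def conv_flops(d, c_in, c_out, h, w):
    # multiply-accumulates of a d x d convolution producing an h x w x c_out output
    return d * d * c_in * c_out * h * w

def bn_flops(c_out, h, w):
    # one scale and one shift per output element
    return 2 * c_out * h * w

# A typical deep ResNet-style layer: the cost ratio is d^2 * c_in / 2,
# i.e. batch normalization is orders of magnitude cheaper than convolution.
d, c_in, c_out, h, w = 3, 512, 512, 14, 14
ratio = conv_flops(d, c_in, c_out, h, w) / bn_flops(c_out, h, w)
print(ratio)  # 2304.0
```

The ratio depends only on d² c_in / 2, so it grows with depth in modern architectures, which is why the batch normalization cost can be neglected.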
Intuitively, Eq. (1) has a connection with template matching in computer vision, which aims to find the parts of an image that match a template. The filter F in Eq. (1) actually works as a template, and we calculate its matching scores with different regions of the input feature X. Since various metrics can be utilized in template matching, it is natural that the ℓ1 distance can replace the cross-correlation in Eq. (1).
3.2 Optimization

Neural networks utilize back-propagation to compute the gradients of the filters and stochastic gradient descent to update the parameters. In CNNs, the partial derivative of the output feature Y with respect to the filter F is calculated as

∂Y(m,n,t)/∂F(i,j,k,t) = X(m+i, n+j, k),  (3)

where i ∈ [0, d] and j ∈ [0, d]. To achieve a better update of the parameters, it is necessary to derive informative gradients for SGD. In AdderNets, the partial derivative of Y with respect to the filter F is

∂Y(m,n,t)/∂F(i,j,k,t) = sgn( X(m+i, n+j, k) − F(i, j, k, t) ),  (4)

where sgn(·) denotes the sign function, so the value of this gradient can only be +1, 0, or -1.
Consider instead the derivative of the ℓ2-norm, i.e. the full-precision difference

∂Y(m,n,t)/∂F(i,j,k,t) = X(m+i, n+j, k) − F(i, j, k, t).  (5)

Eq. (4) can therefore be seen as a signSGD update of the ℓ2-norm. However, signSGD almost never takes the direction of steepest descent, and the direction only gets worse as dimensionality grows. It is thus unsuitable to optimize neural networks with a huge number of parameters using signSGD. Therefore, we propose to use the full-precision gradient in Eq. (5) to update the filters in our AdderNets. The convergence behavior of these two kinds of gradients is further investigated in the supplementary material. By utilizing the full-precision gradient, the filters can be updated precisely.
Besides the gradient of the filters, the gradient of the input features is also important for the update of the parameters. Therefore, we also use the full-precision gradient (Eq. (5)) to calculate the gradient of X. However, the magnitude of the full-precision gradient may be larger than +1 or -1. Denote the filters and inputs of layer i as F_i and X_i, respectively. Different from the gradient of F_i, which only affects F_i itself, a change in the gradient of X_i influences not only layer i but also all layers before layer i according to the gradient chain rule. If we used the full-precision gradient instead of the sign gradient of X for each layer, the magnitude of the gradients in earlier layers would grow, and the discrepancy introduced by the full-precision gradient would be magnified. To this end, we clip the gradient of X to [-1, 1] to prevent gradients from exploding. The partial derivative of the output features with respect to the input features is then calculated as

∂Y(m,n,t)/∂X(m+i, n+j, k) = HT( F(i, j, k, t) − X(m+i, n+j, k) ),  (6)

where HT(·) denotes the HardTanh function:

HT(x) = x if -1 ≤ x ≤ 1,  HT(x) = 1 if x > 1,  HT(x) = -1 if x < -1.  (7)
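The three gradient variants in Eqs. (4)–(7) for a single adder response can be written directly (a numpy sketch; `adder_grads` is an assumed name, not code from the paper):

```python
import numpy as np

def adder_grads(x_patch, f_t, d_y=1.0):
    """Gradients of one adder response Y = -sum|x - f|.

    Returns the sign gradient of Eq. (4), the full-precision filter
    gradient of Eq. (5), and the HardTanh-clipped input gradient of
    Eqs. (6)/(7), each scaled by the upstream gradient d_y.
    """
    diff = x_patch - f_t
    g_sign = d_y * np.sign(diff)                    # Eq. (4): sgn(X - F)
    g_full = d_y * diff                             # Eq. (5): X - F
    g_input = d_y * np.clip(f_t - x_patch, -1, 1)   # Eq. (6): HT(F - X)
    return g_sign, g_full, g_input

x = np.array([0.5, -2.0, 0.1])
f = np.array([0.0, 0.0, 0.3])
gs, gf, gx = adder_grads(x, f)
```

Note how the full-precision filter gradient keeps magnitude information that the sign gradient discards, while the input gradient is clipped so that large differences cannot be amplified through earlier layers.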
3.3 Adaptive Learning Rate Scaling
In conventional CNNs, assuming that the weights and the input features are independent and identically distributed following normal distributions, the variance of the output can be roughly estimated as

Var[Y_CNN] = Σ_{i=0}^{d} Σ_{j=0}^{d} Σ_{k=0}^{c_in} Var[X × F] = d² c_in Var[X] Var[F].  (8)

If the variance of the weights is Var[F] = 1/(d² c_in), the variance of the output is consistent with that of the input, which is beneficial for the information flow in the neural network. In contrast, for AdderNets, the variance of the output can be approximated as

Var[Y_AdderNet] = Σ_{i=0}^{d} Σ_{j=0}^{d} Σ_{k=0}^{c_in} Var[ |X − F| ] = (1 − 2/π) d² c_in ( Var[X] + Var[F] ),  (9)

when X and F follow normal distributions. In practice, the variance of the weights Var[F] is usually very small in an ordinary CNN. Hence, compared with multiplying X by a small value in Eq. (8), the addition operation in Eq. (9) tends to bring a much larger variance of outputs in AdderNets.
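A quick simulation illustrates the variance gap between Eq. (8) and Eq. (9) (a sketch assuming unit-variance inputs and 1/k-variance weights, the usual CNN initialization; the layer sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, c_in, trials = 3, 64, 20000
k = d * d * c_in                      # number of terms summed per output value

X = rng.normal(0.0, 1.0, size=(trials, k))               # unit-variance inputs
F = rng.normal(0.0, np.sqrt(1.0 / k), size=(trials, k))  # Var[F] = 1/(d^2 c_in)

y_conv = (X * F).sum(axis=1)          # cross-correlation output, Eq. (8)
y_adder = -np.abs(X - F).sum(axis=1)  # adder output, Eq. (9)

# Conv output variance stays near 1; the adder output variance is far larger,
# since each |X - F| term contributes roughly (1 - 2/pi) * Var[X] regardless
# of how small the weights are.
print(y_conv.var(), y_adder.var())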
We next proceed to show the influence of this larger output variance on the update of AdderNets. To promote the effectiveness of the activation functions, we introduce batch normalization after each adder layer. Given an input x over a mini-batch B = {x_1, …, x_m}, the batch normalization layer can be denoted as

y_i = γ (x_i − μ_B) / σ_B + β,  (10)

where γ and β are parameters to be learned, and μ_B and σ_B² are the mean and variance over the mini-batch, respectively. The gradient of the loss ℓ with respect to x_i is then calculated as

∂ℓ/∂x_i = (γ / (m σ_B)) [ m ∂ℓ/∂y_i − Σ_{j=1}^{m} ∂ℓ/∂y_j − x̂_i Σ_{j=1}^{m} (∂ℓ/∂y_j) x̂_j ],  (11)

where x̂_i = (x_i − μ_B)/σ_B is the normalized input. Given the much larger variance in Eq. (9), and hence a larger σ_B, the magnitude of the gradient w.r.t. x in AdderNets would be much smaller than that in CNNs according to Eq. (11), and the magnitude of the gradient w.r.t. the filters in AdderNets would be decreased as a result of the gradient chain rule.
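The 1/σ_B dependence in Eq. (11) can be verified numerically with the standard batch-normalization backward pass (a sketch assuming γ = 1 and a single feature over the batch; `bn_backward` is an illustrative name):

```python
import numpy as np

def bn_backward(x, d_y, gamma=1.0, eps=1e-5):
    """Gradient of the loss w.r.t. the batch-norm input x (Eq. (11))."""
    m = x.shape[0]
    sigma = np.sqrt(x.var() + eps)
    x_hat = (x - x.mean()) / sigma
    return (gamma / (m * sigma)) * (m * d_y - d_y.sum() - x_hat * (d_y * x_hat).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=256)
d_y = rng.normal(size=256)

# Scaling the pre-normalization activations by 10 (a 100x larger variance)
# shrinks the gradient w.r.t. x by exactly 10: adder layers, whose outputs
# have larger variance, therefore receive smaller gradients than conv layers.
g_small = bn_backward(x, d_y)
g_large = bn_backward(10 * x, d_y)
```

Because x̂ is invariant to rescaling of x, the only effect of a larger output variance is the 1/σ_B prefactor, which is exactly the gradient vanishing that the adaptive learning rate of Section 3.3 compensates for.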
Table 1: ℓ2-norm of the gradients of filters in each layer (columns: Model, Layer 1, Layer 2, Layer 3).
Table 1 reports the ℓ2-norm of the gradients of the filters in LeNet-5-BN using CNNs and AdderNets on the MNIST dataset during the first iteration. LeNet-5-BN denotes LeNet-5 with a batch normalization layer added after each convolutional layer. As shown in the table, the norms of the gradients of the filters in AdderNets are much smaller than those in CNNs, which could slow down the update of the filters in AdderNets.
A straightforward idea is to directly adopt a larger learning rate for the filters in AdderNets. However, it is worth noting that the norm of the gradient differs considerably between the layers of AdderNets, as shown in Table 1, which requires special consideration of the filters in different layers. To this end, we propose an adaptive learning rate for different layers in AdderNets. Specifically, the update for the adder layer l is calculated by

ΔF_l = γ × α_l × ΔL(F_l),  (12)

where γ is the global learning rate of the whole neural network (shared by the adder and BN layers), ΔL(F_l) is the gradient of the filter in layer l and α_l is its corresponding local learning rate. As the filters in AdderNets perform subtraction with the inputs, the magnitudes of the filters and the inputs should be similar in order to extract meaningful information from the inputs. Because of the batch normalization layers, the magnitudes of the inputs in different layers are already normalized, which then suggests normalizing the magnitudes of the filters in different layers as well. The local learning rate is therefore defined as

α_l = η √k / ‖ΔL(F_l)‖₂,  (13)

where k denotes the number of elements in F_l, and η is a hyper-parameter controlling the learning rate of the adder filters. By using the proposed adaptive learning rate scaling, the adder filters in different layers can be updated with nearly the same step size. The training procedure of the proposed AdderNet is summarized in Algorithm 1.
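Eqs. (12) and (13) amount to normalizing each layer's gradient to a fixed step length; a sketch (with assumed hyper-parameter values and an illustrative function name) is:

```python
import numpy as np

def adaptive_update(F_l, grad_l, gamma=0.1, eta=0.1):
    """One adder-layer update: Delta F_l = gamma * alpha_l * grad (Eq. (12)),
    with the local rate alpha_l = eta * sqrt(k) / ||grad||_2 (Eq. (13))."""
    k = grad_l.size
    alpha_l = eta * np.sqrt(k) / (np.linalg.norm(grad_l) + 1e-12)
    return F_l - gamma * alpha_l * grad_l
```

Because α_l cancels the gradient magnitude, the applied step always has ℓ2-norm γ·η·√k: layers with tiny gradients (Table 1) move just as far as layers with large ones, which is the stated goal of the scaling.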
4 Experiments

In this section, we conduct experiments to validate the effectiveness of the proposed AdderNets on several benchmark datasets, including MNIST, CIFAR and ImageNet. An ablation study and feature visualizations are provided to further investigate the proposed method. The experiments are conducted on an NVIDIA Tesla V100 GPU in PyTorch.
4.1 Experiments on MNIST
To illustrate the effectiveness of the proposed AdderNets, we first train a LeNet-5-BN on the MNIST dataset. The images are resized and pre-processed following standard practice. The networks are optimized using Nesterov Accelerated Gradient (NAG), with weight decay and a momentum of 0.9. We train the networks for 50 epochs using cosine learning rate decay with an initial learning rate of 0.1. The batch size is set as 256. For the proposed AdderNets, we replace the convolutional filters in LeNet-5-BN with our adder filters. Note that since a fully-connected layer can be regarded as a convolutional layer, we also replace the multiplications in the fully-connected layers with subtractions.
The convolutional neural network achieves a 99.4% accuracy with 435K multiplications and 435K additions. By replacing the multiplications in convolution with additions, the proposed AdderNet achieves the same 99.4% accuracy with 870K additions and almost no multiplications. In fact, the theoretical latency of multiplications in CPUs is also larger than that of additions and subtractions. An instruction table (www.agner.org/optimize/instruction_tables.pdf) lists the instruction latencies, throughputs and micro-operation breakdowns for Intel, AMD and VIA CPUs. For example, in the VIA Nano 2000 series, the latencies of float multiplication and addition are 4 and 2 cycles, respectively. On this CPU, the AdderNet using the LeNet-5 model would incur about 1.7M cycles of latency, while the CNN would incur about 2.6M cycles. In conclusion, AdderNet can achieve accuracy similar to a CNN with lower computational cost and latency. Since CUDA- and cuDNN-optimized adder convolutions are not yet available, we do not compare actual inference times.
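The latency figures quoted above follow from simple arithmetic on the instruction-table numbers (the cycle counts are the quoted VIA Nano 2000 values, not measured benchmarks):

```python
# Cycle latencies quoted from the VIA Nano 2000 row of the instruction table
MUL_CYCLES, ADD_CYCLES = 4, 2

cnn = 435_000 * MUL_CYCLES + 435_000 * ADD_CYCLES  # LeNet-5 CNN: muls + adds
adder = 870_000 * ADD_CYCLES                       # AdderNet: additions only

print(cnn, adder)  # 2610000 1740000
```

This is a sequential-latency upper bound rather than a throughput model, but it matches the ~2.6M versus ~1.7M cycle estimate in the text.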
4.2 Experiments on CIFAR
We then evaluate our method on the CIFAR datasets, which consist of 32×32 pixel RGB color images. Since binary networks can use XNOR operations to replace multiplications, we also compare with binary neural networks (BNNs). We use the same data augmentation and pre-processing as He et al. for training and testing. Following Zhou et al., the learning rate is set to 0.1 at the beginning and then follows a polynomial learning rate schedule. The models are trained for 400 epochs with a batch size of 256. We follow the common setting in binary networks, keeping the first and last layers as full-precision convolutional layers; the AdderNets use the same setting for a fair comparison.
The classification results are reported in Table 2. Since the computational costs of the batch normalization layers, the first layer and the last layer are significantly lower than those of the other layers, we omit these layers when counting FLOPs. We first evaluate the VGG-small model on the CIFAR-10 and CIFAR-100 datasets. The AdderNets achieve nearly the same results (93.72% on CIFAR-10 and 72.64% on CIFAR-100) as CNNs (93.80% on CIFAR-10 and 72.73% on CIFAR-100) with no multiplications. Although the model size of the BNN is much smaller than those of the AdderNet and the CNN, its accuracies are much lower (89.80% on CIFAR-10 and 65.41% on CIFAR-100). We then turn to the widely used ResNet models (ResNet-20 and ResNet-32) to further investigate the performance of different networks. For ResNet-20, the convolutional neural network achieves the highest accuracy (92.25% on CIFAR-10 and 68.14% on CIFAR-100), but with a large number of multiplications (41.17M). The proposed AdderNets achieve a 91.84% accuracy on CIFAR-10 and a 67.60% accuracy on CIFAR-100 without multiplications, which is comparable with CNNs. In contrast, the BNNs only achieve 84.87% and 54.14% accuracy on CIFAR-10 and CIFAR-100, respectively. The results for ResNet-32 also suggest that the proposed AdderNets can achieve results similar to those of conventional CNNs.
Table 3: Classification results on ImageNet (columns: Model, Method, #Mul., #Add., XNOR, Top-1 Acc., Top-5 Acc.).
Figure 2: (a) Visualization of filters of AdderNets. (b) Visualization of filters of CNNs.
4.3 Experiments on ImageNet
We next conduct experiments on the ImageNet dataset, whose images are cropped to 224×224 pixel RGB color images. We use the ResNet-18 model to evaluate the proposed AdderNets, following the same data augmentation and pre-processing as He et al. We train the AdderNets for 150 epochs using cosine learning rate decay. The networks are optimized using Nesterov Accelerated Gradient (NAG), with weight decay and a momentum of 0.9. The batch size is set as 256, and the hyper-parameter η in AdderNets is the same as in the CIFAR experiments.
Table 3 shows the classification results on the ImageNet dataset obtained by different neural networks. The convolutional neural network achieves a 69.8% top-1 accuracy and an 89.1% top-5 accuracy with ResNet-18. However, there are 1.8G multiplications in this model, which bring enormous computational complexity. Since the addition operation has a lower computational cost than multiplication, we propose AdderNets to replace the multiplications in CNNs with subtractions. As a result, our AdderNet achieves a 66.8% top-1 accuracy and an 87.4% top-5 accuracy with ResNet-18, demonstrating that adder filters can extract useful information from images. Rastegari et al. proposed XNOR-Net to replace the multiplications in neural networks with XNOR operations. Although the BNN achieves a high speed-up and compression ratio, it reaches only a 51.2% top-1 accuracy and a 73.2% top-5 accuracy with ResNet-18, which is much lower than the proposed AdderNet. We then conduct experiments on a deeper architecture (ResNet-50). The BNN only achieves a 55.8% top-1 accuracy and a 78.4% top-5 accuracy with ResNet-50. In contrast, the proposed AdderNet achieves a 74.9% top-1 accuracy and a 91.7% top-5 accuracy, which is close to that of the CNN (76.2% top-1 and 92.9% top-5).
4.4 Visualization Results
Visualization of features. AdderNets utilize the ℓ1-distance to measure the relationship between filters and input features instead of the cross-correlation used in CNNs, so it is important to investigate the difference between the feature spaces of AdderNets and CNNs. We train a LeNet++ on the MNIST dataset, which has six convolutional layers and a fully-connected layer for extracting low-dimensional features for visualization. The numbers of neurons in each layer are 32, 32, 64, 64, 128, 128, and 2, respectively. For the proposed AdderNet, the convolutional and fully-connected layers are replaced with the proposed adder filters.
The visualization results are shown in Figure 1. The convolutional neural network calculates the cross-correlation between filters and inputs. If the filters and inputs are approximately normalized, the convolution operation is equivalent to calculating the cosine distance between two vectors. That is probably the reason why features of different classes are divided by their angles in Figure 1. In contrast, AdderNets utilize the ℓ1-norm to distinguish different classes, so features tend to be clustered towards different class centers. The visualization results demonstrate that the proposed AdderNets can have a discrimination ability similar to that of CNNs for classifying images.
Visualization of filters. We visualize the filters of the LeNet-5-BN network in Figure 2. Although AdderNets and CNNs utilize different distance metrics, the filters of the proposed adder networks (see Figure 2 (a)) still share some patterns with the convolution filters (see Figure 2 (b)). These visualization experiments further demonstrate that the filters of AdderNets can effectively extract useful information from the input images and features.
Figure 3: Training curves of AdderNets. (a) Accuracy. (b) Loss.
Visualization of the weight distribution. Since the proposed AdderNets use the ℓ1-distance, the distribution of their weights differs from that of CNNs. We visualize the distribution of the weights of the 3rd convolution layer of LeNet-5-BN. As shown in Figure 4, the distribution of weights in AdderNets is close to a Laplace distribution, while that in CNNs looks more like a Gaussian distribution. In fact, the prior distribution corresponding to the ℓ1-norm is the Laplace distribution and that corresponding to the ℓ2-norm is the Gaussian distribution, and the ℓ2-norm is closely related to the cross-correlation, which is analyzed in the supplementary material.
4.5 Ablation Study
We propose to use a full-precision gradient to update the filters in our adder layers and design an adaptive learning rate scaling to deal with different layers in AdderNets. It is essential to evaluate the effectiveness of these components. We first train LeNet-5-BN without changing its learning rate, which results in 54.91% and 29.26% accuracy using the full-precision gradient and the sign gradient, respectively. The networks can hardly be trained since their gradients are very small. Therefore, it is necessary to increase the learning rate of the adder filters.
We directly increase the learning rate for the filters in AdderNets by 100×, which achieves the best performance with the full-precision gradient compared with other candidate values. As shown in Figure 3, the AdderNets using the adaptive learning rate (ALR) and the increased learning rate (ILR) achieve 97.99% and 97.72% accuracy with the sign gradient, which is much lower than the accuracy of the CNN (99.40%). Therefore, we propose the full-precision gradient to update the weights in AdderNets precisely. As a result, the AdderNet with ILR achieves a 98.99% accuracy using the full-precision gradient. By further using the adaptive learning rate (ALR), the AdderNet achieves a 99.40% accuracy, which demonstrates the effectiveness of the proposed ALR method.
5 Conclusion

The role of the classical convolutions used in deep CNNs is to measure the similarity between features and filters, and we are motivated to replace convolutions with a more efficient similarity measure. In this work, we investigate the feasibility of replacing multiplications by additions. An AdderNet is explored that effectively uses addition to build deep neural networks with low computational costs, calculating the ℓ1-norm distance between features and filters. A corresponding optimization method is developed using regularized full-precision gradients. Experiments conducted on benchmark datasets show that AdderNets can well approximate the performance of CNNs with the same architectures, which could have a significant impact on future hardware design. Visualization results also demonstrate that adder filters are promising replacements for the original convolution filters in computer vision tasks. In future work, we will investigate quantization of AdderNets to achieve higher speed-ups and lower energy consumption, as well as the generality of AdderNets beyond image classification to detection and segmentation tasks.
References

- (2018) Convergence rate of sign stochastic gradient descent for non-convex functions.
- (2018) SignSGD: compressed optimisation for non-convex problems. arXiv preprint arXiv:1802.04434.
- (2009) Template matching techniques in computer vision: theory and practice. John Wiley & Sons.
- (2017) Deep learning with low precision by half-wave Gaussian quantization. In CVPR, pp. 5918–5926.
- (2015) BinaryConnect: training deep neural networks with binary weights during propagations. In NeurIPS, pp. 3123–3131.
- (2014) Exploiting linear structure within convolutional networks for efficient evaluation. In NeurIPS.
- (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256.
- (2015) Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149.
- (2016) Deep residual learning for image recognition. In CVPR, pp. 770–778.
- (2017) Channel pruning for accelerating very deep neural networks. In ICCV, pp. 1389–1397.
- (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
- (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
- (2016) Network trimming: a data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250.
- (2018) Squeeze-and-excitation networks. In CVPR, pp. 7132–7141.
- (2016) Binarized neural networks. In NeurIPS, pp. 4107–4115.
- (2016) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size.
- (2012) ImageNet classification with deep convolutional neural networks. In NeurIPS, pp. 1097–1105.
- (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
- (2015) Fully convolutional networks for semantic segmentation. In CVPR, pp. 3431–3440.
- (2016) SGDR: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
- (2017) ThiNet: a filter level pruning method for deep neural network compression. In ICCV, pp. 5058–5066.
- (2016) XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, pp. 525–542.
- (2015) Faster R-CNN: towards real-time object detection with region proposal networks. In NeurIPS, pp. 91–99.
- (2003) On l2-norm regularization and the Gaussian prior.
- (2014) FitNets: hints for thin deep nets. arXiv preprint arXiv:1412.6550.
- (2015) Very deep convolutional networks for large-scale image recognition. In ICLR.
- (1986) The history of statistics: the measurement of uncertainty before 1900. Harvard University Press.
- (2018) Learning versatile filters for efficient convolutional neural networks. In NeurIPS, pp. 1608–1618.
- (2016) CNNpack: packing convolutional neural networks in the frequency domain. In NeurIPS, pp. 253–261.
- (2016) A discriminative feature learning approach for deep face recognition. In ECCV, pp. 499–515.
- (2018) Shift: a zero FLOP, zero parameter alternative to spatial convolutions. In CVPR, pp. 9127–9135.
- (2017) Aggregated residual transformations for deep neural networks. In CVPR, pp. 1492–1500.
- (2017) A gift from knowledge distillation: fast optimization, network minimization and transfer learning. In CVPR, pp. 4133–4141.
- (2017) Learning from multiple teacher networks. In SIGKDD, pp. 1285–1294.
- (2018) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In CVPR, pp. 6848–6856.
- (2018) Shift-based primitives for efficient convolutional neural networks. arXiv preprint arXiv:1809.08458.
- (2016) DoReFa-Net: training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160.
- (2018) Discrimination-aware channel pruning for deep neural networks. In NeurIPS, pp. 875–886.