Introduction
Problem Statement
Harnessing the power of deep convolutional networks in embedded and large-scale application domains requires energy-efficient implementation, leading to great interest in low-precision networks suitable for deployment on low-precision hardware accelerators. Consequently, there has been a flurry of methods for quantizing both the weights and activations of these networks [Jacob et al. 2017; Courbariaux, Bengio, and David 2015; Polino, Pascanu, and Alistarh 2018; Xu et al. 2018; Baskin et al. 2018; Mishra et al. 2017; Choi et al. 2018].
A common perception is that 8-bit networks offer the promise of decreased computational complexity with little loss in accuracy, without any need to retrain. However, published accuracies are typically lower for the quantized networks than for the corresponding full-precision networks [Migacz 2017]. Even training 8-bit networks from scratch fails to close this gap [Jacob et al. 2017] (see Table 1). The situation is even worse for 4-bit precision. To our knowledge, for the ImageNet classification benchmark, no method has been able to match the accuracy of the corresponding full-precision network when quantizing both the weights and activations at the 4-bit level. Closing this performance gap has been an important open problem until now.
| Network | Method | Precision (w,a) | Accuracy (% top-1) | Accuracy (% top-5) |
|---|---|---|---|---|
| ResNet-18 | baseline | 32,32 | 69.76 | 89.08 |
| ResNet-18 | **Apprentice** | 4,8 | 70.40 | |
| ResNet-18 | **FAQ (this paper)** | 8,8 | 70.02 | 89.32 |
| ResNet-18 | **FAQ (this paper)** | 4,4 | 69.82 | 89.10 |
| ResNet-18 | Joint Training | 4,4 | 69.3 | |
| ResNet-18 | UNIQ | 4,8 | 67.02 | |
| ResNet-18 | Distillation | 4,32 | 64.20 | |
| ResNet-34 | baseline | 32,32 | 73.30 | 91.42 |
| ResNet-34 | **FAQ (this paper)** | 8,8 | 73.71 | 91.63 |
| ResNet-34 | **FAQ (this paper)** | 4,4 | 73.31 | 91.32 |
| ResNet-34 | UNIQ | 4,32 | 73.1 | |
| ResNet-34 | Apprentice | 4,8 | 73.1 | |
| ResNet-34 | UNIQ | 4,8 | 71.09 | |
| ResNet-50 | baseline | 32,32 | 76.15 | 92.87 |
| ResNet-50 | **FAQ (this paper)** | 8,8 | 76.52 | 93.09 |
| ResNet-50 | **FAQ (this paper)** | 4,4 | 76.27 | 92.89 |
| ResNet-50 | IOA | 8,8 | 74.9 | |
| ResNet-50 | Apprentice | 4,8 | 74.7 | |
| ResNet-50 | UNIQ | 4,8 | 73.37 | |
| ResNet-152 | baseline | 32,32 | 78.31 | 94.06 |
| ResNet-152 | **FAQ (this paper)** | 8,8 | 78.54 | 94.07 |
| Inception-v3 | baseline | 32,32 | 77.45 | 93.56 |
| Inception-v3 | **FAQ (this paper)** | 8,8 | 77.60 | 93.59 |
| Inception-v3 | IOA | 8,8 | 74.2 | 92.2 |
| DenseNet-161 | baseline | 32,32 | 77.65 | 93.80 |
| DenseNet-161 | **FAQ (this paper)** | 8,8 | 77.84 | 93.91 |
| VGG-16bn | baseline | 32,32 | 73.36 | 91.50 |
| VGG-16bn | **FAQ (this paper)** | 8,8 | 73.66 | 91.56 |
Table 1: ImageNet validation accuracy for FAQ-trained networks, initialized from pretrained models from the PyTorch model zoo. Other results reported in the literature are shown for comparison, with methods exceeding or matching the top-1 baseline in bold. Precision is in bits, where w = weight and a = activation. Accuracy is reported for the ImageNet classification benchmark. Compared methods: Apprentice [Mishra and Marr 2017], Distillation [Polino, Pascanu, and Alistarh 2018], UNIQ [Baskin et al. 2018], IOA [Jacob et al. 2017], Joint Training [Jung et al. 2018]. Since we found that only one epoch was necessary to train any of the models we tried with 8-bit quantization, we were able to quickly test new models to include in this table. However, training 4-bit networks, especially the larger models, takes much longer; we therefore were not able to test as wide a variety of 4-bit networks, which we leave for future work.

Contributions
Guided by theoretical convergence bounds for stochastic gradient descent (SGD), we propose fine-tuning (training pretrained high-precision networks for low-precision inference while combating noise during the training process) as a method for discovering both 4-bit and 8-bit integer networks. We evaluate the proposed solution on the ImageNet benchmark on a representative set of state-of-the-art networks at 8-bit and 4-bit quantization levels (Table 1). Contributions include the following.

We demonstrate 8-bit scores on ResNet-18, 34, 50, and 152, Inception-v3, DenseNet-161, and VGG-16 exceeding the full-precision scores after just one epoch of fine-tuning.

We present the first evidence of 4-bit, fully integer networks which match the accuracy of the original full-precision networks on the ImageNet benchmark.

We present empirical evidence for gradient noise that is introduced by weight quantization. This gradient noise increases with decreasing precision and may account for the difficulty in fine-tuning low-precision networks.

We demonstrate that reducing noise in the training process through the use of larger batches provides further accuracy improvements.

We find direct empirical support that, as with 8-bit quantization, near-optimal 4-bit quantized solutions exist close to high-precision solutions, making training from scratch unnecessary.
Proposed solution
Our goal is to quantize existing networks to 8 and 4 bits for both weights and activations, without increasing the computational complexity of the network to compensate (e.g., with modifications such as feature expansion), while achieving accuracies that match or exceed those of the corresponding full-precision networks. For precision below 8 bits, the typical method is to train the model using SGD while enforcing the constraints [Courbariaux, Bengio, and David 2015]. There are at least two problems faced when training low-precision networks: learning in the face of low-precision weights and activations, and capacity limits in the face of these constraints. Assuming that capacity is sufficient and a low-precision solution exists, we wish to solve the first problem, that is, find a way to train low-precision networks to obtain the best possible score subject to capacity limits.
We use low-precision training to optimize quantized networks. We hypothesize that noise introduced by quantizing weights and activations during training is the crux of the problem and is similar to the gradient noise inherent to stochastic gradient descent. In support of this idea, Polino et al. (2018) showed that unbiased quantization of weights is equivalent, in expectation, to adding Gaussian noise to the activations. The problem is then to find ways to overcome this noise.
SGD requires¹

    N = O( L·‖x₀ − x*‖²/ε + σ²·‖x₀ − x*‖²/ε² )    (1)

iterations to find an ε-approximate optimal value, where σ² is the gradient noise level, L is related to the curvature of the convex function, x₀ and x* are the initial and optimal network parameters, respectively, and ε is the error tolerance [Meka 2017].

¹This assumes a convex loss function, a simpler case.
This suggests two ways to minimize the final error. First, start closer to the solution, i.e., minimize ‖x₀ − x*‖. In fact, Goodfellow et al. [Goodfellow, Vinyals, and Saxe 2014] suggest that much of the training time of SGD is due to the length of the trajectory needed to arrive at a solution. We therefore start with pretrained models for quantization, rather than training from scratch as is customarily done (although see [Zhou et al. 2017; Baskin et al. 2018]). Second, minimize the gradient noise σ². To do this, we combine well-known techniques to combat noise: 1) larger batches, which reduce the gradient noise in proportion to the square root of the batch size [Goodfellow et al. 2016], and 2) proper learning rate annealing with longer training time. We refer to this technique as Fine-tuning after quantization, or FAQ.
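The square-root relationship between batch size and gradient noise in point 1) can be checked with a small simulation. This is our own illustrative sketch, not code from the paper; the toy gradient model and function name are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_GRAD, NOISE_STD = 1.0, 2.0

def batch_grad_std(batch_size, trials=20000):
    # Each per-sample gradient is the true gradient plus zero-mean noise;
    # an SGD step uses the batch mean, whose standard deviation shrinks
    # as 1/sqrt(batch_size).
    samples = TRUE_GRAD + NOISE_STD * rng.standard_normal((trials, batch_size))
    return samples.mean(axis=1).std()

# Quadrupling the batch size roughly halves the gradient noise.
ratio = batch_grad_std(256) / batch_grad_std(1024)
print(round(ratio, 1))  # ≈ 2.0
```

Doubling the batch therefore buys a factor of √2 noise reduction, which is the lever FAQ pulls with its batch-size schedule later in the paper.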
We argue that fine-tuning for quantization is the right approach in the sense that it directly optimizes the proper objective function, the final score, rather than proxies which measure distance from the full-precision network parameters [Migacz 2017].
Background
Network Quantization
In the quest for training state-of-the-art low-precision networks, there has been a vast diversity in how the precision constraints are imposed, as well as in the approaches used for training. Typical variations in applying low-precision constraints include allowing nonuniform quantization of weights and activations [Miyashita, Lee, and Murmann 2016; Zhou et al. 2017; Cai et al. 2017], where the discrete dictionary may depend on the data, and stochastic quantization [Polino, Pascanu, and Alistarh 2018; Courbariaux, Bengio, and David 2015]. Approaches to training these networks include distillation [Polino, Pascanu, and Alistarh 2018], layerwise quantization and retraining [Xu et al. 2018], introducing noise during training [Baskin et al. 2018], increasing features [Mishra et al. 2017], learning quantization-specific parameters using backpropagation [Choi et al. 2018], using Stochastic Variance-Reduced Gradient instead of SGD [Sa et al. 2018], and relaxation methods resembling annealing [Yin et al. 2018].

With the notable exception of a few papers dealing with binary or ternary networks [Courbariaux, Bengio, and David 2015; Rastegari et al. 2016; Courbariaux and Bengio 2016]², most of the literature on low-precision networks constrains the number of discrete values that the weights and activations may assume but otherwise allows them to be floating-point numbers. In addition, low-precision constraints are not necessarily imposed on batch-normalization constants, average-pooling results, etc. in these networks. This is in contrast to how 8-bit integer networks are supported by TensorRT as well as the TensorFlow framework, where all the parameters and activations are quantized to 8-bit fixed-point integers (see for example [Jacob et al. 2017]). Recent attempts [Wu et al. 2018] at training low-precision networks with integer constraints have hinted at the possibility of porting such networks to commercially available hardware for inference³.

We focus on training networks with both weights and activations constrained to be either 4-bit or 8-bit fixed-point integers, and restrict all other scalar multiplicative constants (for example, batch-normalization parameters) in the network to be 8-bit integers and additive constants (for example, bias values) to be 32-bit integers.

²Even these networks may have occasional floating-point scaling steps between layers.
³NVIDIA's recently announced Turing architecture supports 4-bit integer operations, for example.
Low-precision Fine-tuning Methods
We start with pretrained, high-precision networks from the PyTorch model zoo, quantize, and then fine-tune for a variable number of epochs depending on the precision. We hypothesize that noise is the limiting factor in finding low-precision solutions, and use well-known methods to overcome noise in training. Otherwise, we use the techniques of [Courbariaux, Bengio, and David 2015; Esser et al. 2016] to train low-precision networks. Details of this procedure are described next.
Fixed-point quantizer
The quantizer we use throughout this paper is parametrized by the precision b (in number of bits) and the location r of the least-significant bit relative to the radix point, and is denoted Q(·; b, r); the quantization step size is thus 2^r. A calibration phase during initialization is used to determine a unique r for each layer of activations, which then remains fixed throughout fine-tuning. Similarly, each layer's weight tensor, as well as other parameter tensors, is assigned a unique r, and this quantity is determined during each training iteration. The procedures for determining r for activations and other parameters are described in the following subsections. A given scalar x is quantized to a fixed-point integer x_q = clip(round(x/2^r), 0, 2^b − 1) for unsigned values, and x_q = clip(round(x/2^r), −2^(b−1), 2^(b−1) − 1) for signed values.

Given a desired network precision of either 8 or 4 bits, we quantize all weights and activations to this level. In the 4-bit case, we leave the first- and last-layer weights at 8 bits and allow full-precision (32-bit fixed-point) linear activations in the last, fully-connected layer [Courbariaux, Bengio, and David 2015; Esser et al. 2016]. In addition, the input to that last, fully-connected layer is also allowed to be an 8-bit integer, as is common practice in the literature. In such networks, containing 4-bit internal layers and an 8-bit final layer, the transition from 4-bit to 8-bit activations is facilitated by the last ReLU activation layer in the network. Every other ReLU layer's output tensor is quantized to a 4-bit integer tensor.

Quantizing network parameters
Given a weight tensor W, SGD is used to update W as usual, but a fixed-point version is used for inference and gradient calculation [Courbariaux, Bengio, and David 2015; Esser et al. 2016]. The fixed-point version is obtained by applying Q(·; b, r) elementwise. The quantization parameter r for a given weight tensor is updated during every iteration and computed as follows: we first determine a desired quantization step size s by clipping the weight tensor at a constant multiple⁴ of its numerically estimated standard deviation, and then dividing this clipped range into equally sized bins. Finally, the required parameter r is calculated as r = ⌈log₂ s⌉. All other parameters, including those used in batch normalization, are assigned r in the same manner.

⁴The constant, in general, depends on the precision. We used a constant of 4.12 for all our 4-bit experiments.

Initialization
Network parameters are first initialized from an available pretrained model file (https://pytorch.org/docs/stable/torchvision/models.html). Next, the quantization parameter r for each layer of activations is calibrated using the following procedure. Following [Jacob et al. 2017], we run several (5) training data batches through the unquantized network to determine the maximum range m for uniform quantization. Specifically, for each layer, m is the maximum across all batches of the 99.99th percentile of that layer's activation tensor, rounded up to the next even power of two. This percentile level was found to give the best initial validation score for 8-bit layers, while 99.9 was best for layers with 4-bit ReLUs. The estimated m, in turn, determines the quantization parameter r for that tensor. For ReLU layers, the tensor clipped to the range [0, m] is then quantized using Q(·; b, r). Once these activation function parameters are determined for each of the tensors, they are kept fixed during subsequent fine-tuning.

For control experiments which start from random initialization rather than pretrained weights, we did not perform this ReLU calibration step, since initial activation ranges are unlikely to be correct. In these experiments, we set the maximum range of all ReLU activation functions to 2^b, where b is the number of bits of precision.
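The quantizer and the percentile-based ReLU calibration above can be sketched in NumPy. This is a minimal sketch under our reading of the notation Q(·; b, r); the function names, and the choice of b = 4 in the example, are our own.

```python
import numpy as np

def quantize(x, b, r, signed=True):
    # Q(x; b, r): round to a grid with step 2**r, clip to the b-bit integer
    # range, and return the dequantized fixed-point value.
    step = 2.0 ** r
    q = np.round(x / step)
    lo, hi = (-2 ** (b - 1), 2 ** (b - 1) - 1) if signed else (0, 2 ** b - 1)
    return np.clip(q, lo, hi) * step

def calibrate_relu_r(activations, b, percentile=99.99):
    # Estimate the max range m as a high percentile rounded up to the next
    # power of two, then pick r so that m fills the b-bit unsigned range.
    m = 2.0 ** np.ceil(np.log2(np.percentile(activations, percentile)))
    return int(np.log2(m)) - b

acts = np.abs(np.random.default_rng(1).normal(size=10_000))
r = calibrate_relu_r(acts, b=4)
print(quantize(np.array([0.26]), b=4, r=-4, signed=False))  # [0.25]
```

With b = 4 and r = −4, the grid step is 2⁻⁴ = 0.0625, so 0.26 snaps to the nearest representable value, 0.25; values beyond the clip range saturate at the largest code.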
Training
To train such a quantized network we use the typical procedure of keeping a floating-point copy of the weights, which is updated with the gradients as in normal SGD, and quantizing weights and activations in the forward pass [Courbariaux, Bengio, and David 2015; Esser et al. 2016], clipping values that fall above the maximum range as described above. We also use the straight-through estimator [Bengio, Léonard, and Courville 2013] to pass the gradient through the quantization operator.
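The straight-through estimator can be illustrated with a toy example. This is our own NumPy sketch, not the paper's PyTorch code: the forward pass and gradient use the quantized weights, while the quantizer is treated as the identity in the backward pass, so the update lands on the full-precision copy.

```python
import numpy as np

GRID = 0.0625  # 2**-4, a coarse quantization grid for illustration

def quantize_w(w, step=GRID):
    return np.round(w / step) * step

def sgd_step_ste(w_fp, grad_fn, lr=0.1):
    # Forward/backward use the quantized weights; the straight-through
    # estimator passes the gradient through the quantizer unchanged, so
    # the update is applied to the full-precision copy w_fp.
    g = grad_fn(quantize_w(w_fp))
    return w_fp - lr * g

# Toy loss L(w) = 0.5 * (w - 1)**2, so grad(w) = w - 1; the quantized
# weight climbs the grid and settles on the representable point 1.0,
# even though no single float update is aligned to the grid.
w = np.array([0.30])
for _ in range(100):
    w = sgd_step_ste(w, lambda wq: wq - 1.0)
print(quantize_w(w))
```

Without the STE, the gradient of the round operation is zero almost everywhere and learning would stall; passing the gradient through is what lets the float copy accumulate sub-grid progress.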
For fine-tuning pretrained 8-bit networks, since the initial quantization is in many cases already within a few percent of the full-precision network, we find that we need only a single additional epoch of training after the initial quantization step, with a reduced learning rate, and no other changes are made to the original training parameters during fine-tuning.
However, for 4-bit networks, the initial quantization alone gives poor performance, and matching the performance of the full-precision net requires training for 110 additional epochs using exponential decay of the learning rate, such that the learning rate drops from the initial rate of 0.0015 (slightly higher than the final learning rate used to train the pretrained net) to a final value of approximately 10⁻⁶. Accordingly, we multiply the learning rate by 0.936 after each epoch of a 110-epoch fine-tuning run. In addition, for the smallest 4-bit network, ResNet-18, the weight-decay parameter is reduced from 0.0001, used to train the ResNet models, to 0.00005, assuming that less regularization is needed with smaller, lower-precision networks. The batch size used was 256, split over 2 GPUs. SGD with momentum was used for optimization. Software was implemented using PyTorch.
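The per-epoch factor is just the ratio of final to initial learning rate raised to the power 1/epochs. A quick arithmetic check (our own sketch; the 10⁻⁶ target is inferred from the stated 0.936 factor) confirms the schedule:

```python
lr0, epochs, gamma = 0.0015, 110, 0.936

# Final learning rate after 110 epochs of exponential decay.
lr_final = lr0 * gamma ** epochs
print(f"{lr_final:.1e}")  # 1.0e-06

# Conversely, recover the per-epoch factor needed to reach a target rate.
target = 1e-6
gamma_needed = (target / lr0) ** (1.0 / epochs)
print(round(gamma_needed, 3))  # 0.936
```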
Experiments
Fine-tuning matches or exceeds the accuracy of the initial high-precision network and outperforms other methods
FAQ-trained 8-bit networks outperform all comparable quantization methods in all but one instance, and exceeded pretrained fp32 network accuracy with only one epoch of training following quantization for all networks explored (Table 1). Immediately following quantization, network accuracy was nearly at the level of the pretrained networks (data not shown), with one exception, Inception-v3, which started at 72.34% top-1. Since the networks started close to a good solution, they did not require extensive fine-tuning to return to and surpass pretrained network accuracy.
FAQ-trained 4-bit network accuracy outperforms all comparable quantization methods, exceeding the next closest approach by over 0.5% for ResNet-18 [Jung et al. 2018], and matched or exceeded pretrained fp32 network accuracy. FAQ 4-bit networks required significantly longer fine-tuning, 110 epochs, for the networks trained: ResNet-18, ResNet-34, and ResNet-50. In contrast to the 8-bit cases, network accuracy immediately following quantization dropped precipitously, requiring significant fine-tuning to match and surpass the pretrained networks.
FAQ-trained 4-bit network accuracy is sensitive to several hyperparameters (Table 2). In summary, longer training duration, initialization from a pretrained model, larger batch size, lower weight decay, and activation calibration all contribute to improved accuracy when training the 4-bit ResNet-18 network, while the exact learning rate decay schedule contributed the least. We elaborate on some of these results subsequently.

| Epochs | Pretrained | Batch size | Learning rate schedule | Weight decay | Activation calibration | Accuracy (% top-1) | Change |
|---|---|---|---|---|---|---|---|
| 110 | Yes | 256 | exp. | 0.00005 | Yes | 69.82 | |
| 60 | Yes | 400 | exp. | 0.00005 | Yes | 69.40 | −0.22 |
| 110 | No | 256 | exp. | 0.00005 | Yes | 69.24 | −0.58 |
| 165* | Yes | 256–2048 | exp. | 0.00005 | Yes | 69.96 | +0.14 |
| 110 | Yes | 256 | step | 0.00005 | Yes | 69.90 | +0.08 |
| 110 | Yes | 256 | exp. | 0.0001 | Yes | 69.59 | −0.23 |
| 110 | Yes | 256 | exp. | 0.00005 | No | 69.19 | −0.63 |

Table 2: Sensitivity of 4-bit ResNet-18 accuracy to training hyperparameters; Change is relative to the first row.
Longer training time was necessary for 4-bit networks
For the 4-bit network, longer fine-tuning improved accuracy (Table 2), potentially by averaging out the gradient noise introduced by discretization [Polino, Pascanu, and Alistarh 2018]. We explored sensitivity to shortened fine-tuning by repeating the experiment for 30, 60, and 110 epochs, with the same initial and final learning rates in all cases, resulting in top-1 accuracies of 69.30, 69.40, and 69.68, respectively. The hyperparameters were identical, except that the batch size was increased from 256 to 400. These results indicate that training longer was necessary.
Quantizing a pretrained network improves accuracy
Initializing networks with a discretized pretrained network followed by fine-tuning improved accuracy compared with training a quantized network from random initialization for the same duration (Table 2), suggesting that proximity to a full-precision network enhances low-precision fine-tuning. For a 4-bit network, we explored the contribution of the pretrained network by training two ResNet-18 networks with standard initialization for 110 epochs: one with the previous learning rate decay schedule⁵ and the other with a step learning rate schedule from [Choi et al. 2018], plus an additional drop at the end to match the fine-tuning experiments. Both approaches reached top-1 accuracies less than FAQ's accuracy after 30 epochs, and more than 0.5% short of FAQ's accuracy after 110 epochs. The one FAQ change that degraded accuracy the most was neglecting to calibrate activation ranges for each layer using the pretrained model, which dropped accuracy by 0.63%. This is another possible reason why training 8-bit networks from scratch has not achieved higher scores in the past [Jacob et al. 2017].

⁵We used a higher initial learning rate, equal to that used to train the full-precision net from scratch, with a decay factor chosen such that the final learning rate matched that of the fine-tuning experiments.
Reducing noise with larger batch size improves accuracy
Fine-tuning with increasing batch size improved accuracy (Table 2), suggesting that gradient noise limits low-precision accuracy. For a 4-bit network, we explored the contribution of increasing FAQ batch size with a ResNet-18 network, which increased top-1 validation accuracy to 69.96%. We scheduled batch sizes, starting at 256 and doubling at three intermediate epochs to reach a maximum batch size of 2048⁶; each doubling reduces the gradient noise by a factor of √2, since the noise is inversely proportional to the square root of the batch size. We used 165 epochs to approximately conserve the number of weight updates relative to the 110-epoch, batch-size-256 case, as our focus here is not training faster but reducing gradient noise to improve final accuracy.

⁶To simulate batch sizes larger than fit within memory constraints, we used virtual batches, updating the weights once every n actual batches with the gradient average, for an effective batch size n times larger.
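The virtual-batch trick in the footnote is what is now commonly called gradient accumulation. A sketch (our own, with hypothetical sizes) shows that averaging n micro-batch gradients before a single update is numerically equivalent to one large-batch step:

```python
import numpy as np

def accumulated_step(w, grads, micro_batch, n_virtual, lr=0.01):
    # Accumulate the mean gradient of n_virtual micro-batches, then apply
    # one update with their average: effective batch = micro_batch * n_virtual.
    g_sum = np.zeros_like(w)
    for i in range(n_virtual):
        g_sum += grads[i * micro_batch:(i + 1) * micro_batch].mean(axis=0)
    return w - lr * g_sum / n_virtual

rng = np.random.default_rng(0)
w0 = np.zeros(3)
grads = rng.normal(size=(1024, 3))          # per-sample gradients, one step
w_virtual = accumulated_step(w0, grads, micro_batch=256, n_virtual=4)
w_big = w0 - 0.01 * grads.mean(axis=0)      # one true batch-1024 step
print(np.allclose(w_virtual, w_big))  # True
```

Because the micro-batches are equal-sized, the average of their means equals the full-batch mean, so memory limits the micro-batch size but not the effective batch size.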
The exact form of exponential learning rate decay was not critical
Replacing the exponential learning rate decay with a typical step decay, which reduced the learning rate from 0.0015 to 1.5 × 10⁻⁶ in 3 steps of 0.1 at epochs 30, 60, and 90, improved results slightly (+0.08). This suggests that FAQ is insensitive to the exact form of the learning rate decay schedule.
Reducing weight decay improves accuracy for ResNet-18 but not for ResNet-34 or -50 for 4-bit networks
For the 4-bit ResNet-18 network, increasing weight decay from 0.00005 to 0.0001, the value used for the original pretrained network, reduced the validation accuracy by 0.23% (Table 2). This result suggests that the smaller ResNet-18 may lack sufficient capacity to compensate for low-precision weights and activations. In contrast, for the 4-bit ResNet-34 and -50 networks, the best results were obtained with a weight decay of 0.0001.
Quantizing weights introduces gradient noise
Weight discretization increases gradient noise for 8-, 4-, and 2-bit networks⁷. We define the increase in gradient noise due to weight discretization as the angular difference between the step taken by the learning algorithm on the floating-point copy of the weights at iteration t and the actual step taken by the quantized weights. We measure this angle using the cosine similarity (normalized dot product) between the instantaneous floating-point step and an exponential moving average of the actual step directions (Figure 1). A cosine similarity of 1 corresponds to an fp32 network and the absence of discretization-induced gradient noise. As bit precision decreases, the similarity decreases, signaling higher gradient noise.

These results directly show that discretization-induced gradient noise appreciably influences the fine-tuning and training trajectories of quantized networks. The increased noise (decreased similarity) of the 4-bit case compared to the 8-bit case possibly accounts for the difference in fine-tuning times required. Even the 8-bit case is significantly below unity, possibly explaining why training from scratch has not led to the highest performance [Jacob et al. 2017].

⁷The 2-bit network is used only to demonstrate how discretization-induced gradient noise varies with bit precision.
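The noise measurement described above can be mimicked in a toy setting. This is entirely our own sketch, not the paper's measurement code: random vectors stand in for real minibatch gradients, and a coarser quantization grid stands in for lower bit precision.

```python
import numpy as np

def snap(w, step):
    # Snap weights to a quantization grid with the given step size.
    return np.round(w / step) * step

def mean_step_similarity(step, iters=200, dim=100, lr=0.01, alpha=0.9, seed=0):
    # Average cosine similarity between the instantaneous step on the float
    # copy and an EMA of the steps actually taken by the quantized weights.
    # Lower similarity = more discretization-induced gradient noise.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=dim)
    ema = np.zeros(dim)
    sims = []
    for _ in range(iters):
        g = rng.normal(size=dim)                # stand-in minibatch gradient
        w_new = w - lr * g
        quant_step = snap(w_new, step) - snap(w, step)
        ema = alpha * ema + (1 - alpha) * quant_step
        denom = np.linalg.norm(w_new - w) * np.linalg.norm(ema)
        if denom > 0:
            sims.append(float((w_new - w) @ ema / denom))
        w = w_new
    return float(np.mean(sims))

# A finer grid (higher effective precision) yields higher step similarity,
# i.e. less discretization-induced gradient noise.
print(mean_step_similarity(step=2 ** -8) > mean_step_similarity(step=2 ** -2))
```

On the coarse grid most per-coordinate updates are too small to move a quantized weight, so the actual steps are sparse and poorly aligned with the intended float steps, mirroring the precision-dependent trend in Figure 1.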
The 4-bit solution was similar to the high-precision solution
The weights of the FAQ-trained 4-bit network were similar to those of the full-precision pretrained network used for its initialization (Figure 2). We define network similarity as the cosine similarity between the networks' weights. The average cosine similarity between the weights of every corresponding neuron in the two networks is very close to 1, indicating that the weight vectors did not move far during 110 epochs of fine-tuning and that the 4-bit network exists close to its high-precision counterpart, demonstrating that the pretrained initialization strongly influenced the final network. Contrast this with the same measure when training from scratch, where the similarity between the initial and final weights is close to 0. The fact that the 4-bit solution was close to the high-precision solution suggests that training from scratch is unnecessary.

Discussion
We show here that low-precision quantization followed by fine-tuning, when properly compensating for noise, is sufficient to achieve state-of-the-art performance for networks employing 4- and 8-bit weights and activations. Compared to previous work, our approach offers a major advantage in the 8-bit space, by requiring only a single epoch of post-quantization training to consistently exceed high-precision network scores, and a major advantage in the 4-bit space by, for the first time, matching high-precision baseline scores. We find support for the idea that overcoming noise is the main challenge in successful fine-tuning, given sufficient capacity in a network model: longer training times, exponential learning rate decay, a very low final learning rate, and larger batch sizes all seem to contribute to improving the results of fine-tuning.
We believe that this approach marks a major change in how low-precision networks can be trained, particularly given the wide availability of pretrained high-precision models. We conjecture that within every region containing a local minimum for a high-precision network, there exist subregions which also contain solutions to the lower-precision 4-bit problem, provided that the network has sufficient capacity. The experiments reported herein provide experimental support for this conjecture. In addition, given that the weights of a low-precision net can be trivially represented in higher precision, it could be argued that the lower-precision regions would likely be found well inside the boundary defining the 32-bit region, requiring additional training to move into the center (for example, see Figure 1 of [Izmailov et al. 2018]). This provides a possible reason why additional training, and not merely quantization that tries to match the high-precision weights as closely as possible, yields the best accuracy for a given level of quantization. If true, 1) this direct method of fine-tuning will become a standard quantization methodology, supplanting these other methods; and 2) additional techniques which better optimize high-precision networks will lead to better quantization performance as well, perhaps without any further training. We predict, for instance, that a recently proposed weight-averaging method designed to move closer to the minimum [Izmailov et al. 2018] will also lead to higher accuracy of the initial quantization.
We expect that other approaches will further improve quantization results. For example, layer-specific learning rates and the Adam learning algorithm achieved the best results in an early low-precision study using binary weights [Courbariaux, Bengio, and David 2015]. Optimal learning rates, and better effective learning rate adjustments over the course of training, may allow the network to learn more effectively in the presence of noise, thereby improving the results of fine-tuning. Perhaps new training algorithms designed specifically to fight the ill effects of noise introduced by weight quantization [Baskin et al. 2018] will lead to further improvements, for example by incorporating Kalman-filtering techniques for optimal noise-level estimation.
It will be interesting in the future to apply fine-tuning to networks quantized to 2 bits. Training in the 2-bit case will be more challenging given the additional noise due to quantization (Figure 2), and there may be a fundamental capacity limit with 2-bit quantization, at least for smaller networks like ResNet-18. It seems more likely that larger 2-bit models will be able to match the accuracy of the full-precision models [Mishra et al. 2017].
Fine-tuning for quantization has been previously studied. In [Zhou et al. 2017], increasingly larger subsets of neurons from a pretrained network are replaced with low-precision neurons and fine-tuned, in stages, until all neurons have been replaced. The accuracy exceeds the baseline for a range of networks quantized with 5-bit weights and 32-bit activations. Our results here, with both weights and activations at fixed precision of either 8 or 4 bits, suggest that the key was fine-tuning, and that the incremental training may have been unnecessary. In [Baskin et al. 2018], fine-tuning is employed along with a nonlinear quantization scheme during training to show successful quantization on a variety of networks. We list their results for 4-bit weights and either 8- or 32-bit activations in Table 1 for comparison. We have shown that low-precision quantization followed by fine-tuning, when properly compensating for noise, is sufficient to achieve even greater accuracy when quantizing both weights and activations at 4 bits.
Fine-tuning is a principled approach to quantization. Ultimately, the goal of quantization is to match or exceed the validation score of the corresponding full-precision network. We would argue that the most direct way to achieve this is to fine-tune the network using the original objective function used for model selection and training: the training error. There is no reason to switch objective functions during quantization to one which tries to closely approximate the high-precision network parameters. Our work here demonstrates that 8-bit and 4-bit quantized networks performing at the level of their high-precision counterparts can be created with a modest investment of training time, a critical step towards harnessing the energy efficiency of low-precision hardware.
References
 [Baskin et al.2018] Baskin, C.; Schwartz, E.; Zheltonozhskii, E.; Liss, N.; Giryes, R.; Bronstein, A. M.; and Mendelson, A. 2018. UNIQ: uniform noise injection for the quantization of neural networks. CoRR abs/1804.10969.
 [Bengio, Léonard, and Courville2013] Bengio, Y.; Léonard, N.; and Courville, A. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432.
 [Cai et al.2017] Cai, Z.; He, X.; Sun, J.; and Vasconcelos, N. 2017. Deep learning with low precision by halfwave gaussian quantization. CoRR abs/1702.00953.
 [Choi et al.2018] Choi, J.; Wang, Z.; Venkataramani, S.; Chuang, P. I.; Srinivasan, V.; and Gopalakrishnan, K. 2018. PACT: parameterized clipping activation for quantized neural networks. CoRR abs/1805.06085.
 [Courbariaux and Bengio2016] Courbariaux, M., and Bengio, Y. 2016. Binarynet: Training deep neural networks with weights and activations constrained to +1 or −1. CoRR abs/1602.02830.
 [Courbariaux, Bengio, and David2015] Courbariaux, M.; Bengio, Y.; and David, J.-P. 2015. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, 3123–3131.
 [Esser et al.2016] Esser, S.; Merolla, P.; Arthur, J.; Cassidy, A.; Appuswamy, R.; Andreopoulos, A.; Berg, D.; McKinstry, J.; Melano, T.; Barch, D.; et al. 2016. Convolutional networks for fast, energy-efficient neuromorphic computing. Preprint on arXiv: http://arxiv.org/abs/1603.08270.
 [Goodfellow et al.2016] Goodfellow, I.; Bengio, Y.; Courville, A.; and Bengio, Y. 2016. Deep learning, volume 1. MIT press Cambridge.
 [Goodfellow, Vinyals, and Saxe2014] Goodfellow, I. J.; Vinyals, O.; and Saxe, A. M. 2014. Qualitatively characterizing neural network optimization problems. arXiv preprint arXiv:1412.6544.

 [He et al.2016] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
 [Huang et al.2017] Huang, G.; Liu, Z.; Van Der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In CVPR, volume 1, 3.
 [Izmailov et al.2018] Izmailov, P.; Podoprikhin, D.; Garipov, T.; Vetrov, D.; and Wilson, A. G. 2018. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407.
 [Jacob et al.2017] Jacob, B.; Kligys, S.; Chen, B.; Zhu, M.; Tang, M.; Howard, A.; Adam, H.; and Kalenichenko, D. 2017. Quantization and training of neural networks for efficient integerarithmeticonly inference. arXiv preprint arXiv:1712.05877.
 [Jung et al.2018] Jung, S.; Son, C.; Lee, S.; Son, J.; Kwak, Y.; Han, J.J.; and Choi, C. 2018. Joint training of lowprecision neural network with quantization interval parameters. arXiv preprint arXiv:1808.05779.
 [Meka2017] Meka, R. 2017. Cs289ml: Notes on convergence of gradient descent. https://raghumeka.github.io/CS289ML/gdnotes.pdf.
 [Migacz2017] Migacz, S. 2017. Nvidia 8bit inference with tensorrt. GPU Technology Conference.
 [Mishra and Marr2017] Mishra, A., and Marr, D. 2017. Apprentice: Using knowledge distillation techniques to improve lowprecision network accuracy. arXiv preprint arXiv:1711.05852.
 [Mishra et al.2017] Mishra, A. K.; Nurvitadhi, E.; Cook, J. J.; and Marr, D. 2017. WRPN: wide reducedprecision networks. CoRR abs/1709.01134.
 [Miyashita, Lee, and Murmann2016] Miyashita, D.; Lee, E. H.; and Murmann, B. 2016. Convolutional neural networks using logarithmic data representation. CoRR abs/1603.01025.
 [Polino, Pascanu, and Alistarh2018] Polino, A.; Pascanu, R.; and Alistarh, D. 2018. Model compression via distillation and quantization. CoRR abs/1802.05668.
 [Rastegari et al.2016] Rastegari, M.; Ordonez, V.; Redmon, J.; and Farhadi, A. 2016. Xnornet: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, 525–542. Springer.
 [Sa et al.2018] Sa, C. D.; Leszczynski, M.; Zhang, J.; Marzoev, A.; Aberger, C. R.; Olukotun, K.; and Ré, C. 2018. Highaccuracy lowprecision training. CoRR abs/1803.03383.
 [Simonyan and Zisserman2014] Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for largescale image recognition. arXiv preprint arXiv:1409.1556.
 [Szegedy et al.2016] Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2818–2826.
 [Wu et al.2018] Wu, S.; Li, G.; Chen, F.; and Shi, L. 2018. Training and inference with integers in deep neural networks. In International Conference on Learning Representations.
 [Xu et al.2018] Xu, Y.; Wang, Y.; Zhou, A.; Lin, W.; and Xiong, H. 2018. Deep neural network compression with single and multiple level quantization. CoRR abs/1803.03289.
 [Yin et al.2018] Yin, P.; Zhang, S.; Lyu, J.; Osher, S.; Qi, Y.; and Xin, J. 2018. Binaryrelax: A relaxation approach for training deep neural networks with quantized weights. CoRR abs/1801.06313.
 [Zhou et al.2017] Zhou, A.; Yao, A.; Guo, Y.; Xu, L.; and Chen, Y. 2017. Incremental network quantization: Towards lossless cnns with lowprecision weights. CoRR abs/1702.03044.