DoReFa_Cifar10
An implementation of DoReFa-Net in TensorFlow on the CIFAR-10 dataset
We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.
Recent progress in deep Convolutional Neural Networks (DCNN) has considerably changed the landscape of computer vision (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a) and NLP (Bahdanau et al., 2014). However, a state-of-the-art DCNN usually has many parameters and high computational complexity, which both impedes its application in embedded devices and slows down the iteration of its research and development.
For example, the training process of a DCNN may take up to weeks on a modern multi-GPU server for large datasets like ImageNet (Deng et al., 2009). In light of this, substantial research efforts are invested in speeding up DCNNs at both run-time and training-time, on both general-purpose (Vanhoucke et al., 2011; Gong et al., 2014; Han et al., 2015b) and specialized computer hardware (Farabet et al., 2011; Pham et al., 2012; Chen et al., 2014a, b). Various approaches like quantization (Wu et al., 2015) and sparsification (Han et al., 2015a) have also been proposed.
Recent research efforts (Courbariaux et al., 2014; Kim & Smaragdis, 2016; Rastegari et al., 2016; Merolla et al., 2016) have considerably reduced both model size and computation complexity by using low bitwidth weights and low bitwidth activations. In particular, in BNN (Courbariaux & Bengio, 2016) and XNOR-Net (Rastegari et al., 2016), both weights and input activations of convolutional layers (note that fully-connected layers are special cases of convolutional layers) are binarized. Hence during the forward pass the most computationally expensive convolutions can be done by bitwise operation kernels, thanks to the following formula, which computes the dot product of two bit vectors x and y using bitwise operations, where bitcount counts the number of set bits in a bit vector:

x · y = bitcount(and(x, y)),   x_i, y_i ∈ {0, 1} for all i.   (1)
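As a minimal sketch (our own illustration, not the paper's released code), Eqn. 1 can be exercised in Python by packing bit vectors into integers:

```python
def pack_bits(bits):
    """Pack a list of 0/1 values into a Python int acting as a bit vector."""
    v = 0
    for i, b in enumerate(bits):
        v |= (b & 1) << i
    return v

def bit_dot(x, y):
    """Eqn. 1: x . y = bitcount(and(x, y)) for packed bit vectors x, y."""
    return bin(x & y).count("1")

# The bitwise kernel agrees with the ordinary dot product of 0/1 vectors.
a, b = [1, 0, 1, 1], [1, 1, 0, 1]
result = bit_dot(pack_bits(a), pack_bits(b))  # = 1*1 + 0*1 + 1*0 + 1*1 = 2
```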
However, to the best of our knowledge, no previous work has succeeded in quantizing gradients to numbers with bitwidth less than 8 during the backward pass while still achieving comparable prediction accuracy. In some previous research (Gupta et al., 2015; Courbariaux et al., 2014), convolutions involve at least 10-bit numbers. In BNN and XNOR-Net, though weights are binarized, gradients are kept in full precision, therefore the backward pass still requires convolutions between 1-bit numbers and 32-bit floating-point numbers. The inability to exploit bit convolutions during the backward pass means that most of the training time of BNN and XNOR-Net is spent in the backward pass.
This paper makes the following contributions:
We generalize the method of binarized neural networks to allow creating DoReFa-Net, a CNN that has arbitrary bitwidth in weights, activations, and gradients. As convolutions during forward/backward passes can then operate on low bit weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both the forward pass and the backward pass of the training process.
As bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate low bitwidth neural network training on these hardware. In particular, with the power efficiency of FPGA and ASIC, we may considerably reduce energy consumption of low bitwidth neural network training.
We explore the configuration space of bitwidth for weights, activations and gradients for DoReFa-Net. E.g., training a network using 1-bit weights, 1-bit activations and 2-bit gradients can lead to 93% accuracy on SVHN dataset. In our experiments, gradients in general require larger bitwidth than activations, and activations in general require larger bitwidth than weights, to lessen the degradation of prediction accuracy compared to 32-bit precision counterparts. We name our method “DoReFa-Net” to take note of these phenomena.
We release in TensorFlow (Abadi et al., 2015) format a DoReFa-Net derived from AlexNet (Krizhevsky et al., 2012) that gets 46.1% single-crop top-1 accuracy on the ILSVRC12 validation set. (The model and supplementary materials are available at https://github.com/ppwwyyxx/tensorpack/tree/master/examples/DoReFa-Net.) A reference implementation for training a DoReFa-Net on the SVHN dataset is also available.

In this section we detail our formulation of DoReFa-Net, a method to train a neural network that has low bitwidth weights and activations, using low bitwidth parameter gradients. We note that while weights and activations can be deterministically quantized, gradients need to be stochastically quantized.

We first outline how to exploit the bit convolution kernel in DoReFa-Net and then elaborate the method to quantize weights, activations and gradients to low bitwidth numbers.
The 1-bit dot product kernel specified in Eqn. 1 can also be used to compute dot products, and consequently convolutions, for low bitwidth fixed-point integers. Assume x is a sequence of M-bit fixed-point integers s.t. x = Σ_{m=0}^{M−1} c_m(x) 2^m and y is a sequence of K-bit fixed-point integers s.t. y = Σ_{k=0}^{K−1} c_k(y) 2^k, where (c_m(x))_{m=0}^{M−1} and (c_k(y))_{k=0}^{K−1} are bit vectors. The dot product of x and y can be computed by bitwise operations as:

x · y = Σ_{m=0}^{M−1} Σ_{k=0}^{K−1} 2^{m+k} bitcount(and(c_m(x), c_k(y))),   (3)
c_m(x)_i, c_k(y)_i ∈ {0, 1} for all i, m, k.   (4)

In the above equation, the computation complexity is O(MK), i.e., directly proportional to the bitwidths of x and y.
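The bit-plane decomposition in Eqns. 3-4 can be sketched in a few lines of Python (our own illustration; the helper names are ours). Each sequence of small unsigned integers is split into M (resp. K) packed bit vectors, and the dot product is assembled from M×K bitcount operations:

```python
def bitplanes(values, nbits):
    """Split a list of unsigned ints (< 2**nbits) into nbits packed bit vectors,
    where plane m holds bit m of every element."""
    planes = []
    for m in range(nbits):
        plane = 0
        for i, v in enumerate(values):
            plane |= ((v >> m) & 1) << i
        planes.append(plane)
    return planes

def fixed_point_dot(xs, ys, M, K):
    """Eqn. 3: sum over 2^(m+k) * bitcount(and(c_m(x), c_k(y))); O(M*K) bit ops."""
    cx, cy = bitplanes(xs, M), bitplanes(ys, K)
    return sum((1 << (m + k)) * bin(cx[m] & cy[k]).count("1")
               for m in range(M) for k in range(K))

xs, ys = [3, 1, 2], [2, 3, 1]   # 2-bit operands
result = fixed_point_dot(xs, ys, 2, 2)  # equals 3*2 + 1*3 + 2*1 = 11
```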
The set of real numbers representable by a low bitwidth number k only has a small ordinality 2^k. However, mathematically any continuous function whose range is a small finite set would necessarily always have zero gradient with respect to its input. We adopt the "straight-through estimator" (STE) method (Hinton et al., 2012b; Bengio et al., 2013) to circumvent this problem. An STE can be thought of as an operator that has arbitrary forward and backward operations. A simple example is the STE for sampling from a Bernoulli distribution with probability p ∈ [0, 1]:

Forward: q ~ Bernoulli(p)
Backward: ∂c/∂p = ∂c/∂q

Here c denotes the objective function. As sampling from a Bernoulli distribution is not a differentiable function, "∂q/∂p" is not well defined, hence the backward pass cannot be directly constructed from the forward pass using the chain rule. Nevertheless, because q is on expectation equal to p, we may use the well-defined gradient ∂c/∂q as an approximation for ∂c/∂p and construct an STE as above. In other words, the STE construction gives a custom-defined "∂q/∂p".

An STE we will use extensively in this work is quantize_k, which quantizes a real-number input r_i ∈ [0, 1] to a k-bit number output r_o ∈ [0, 1]. This STE is defined as below:
Forward: r_o = (1 / (2^k − 1)) round((2^k − 1) r_i)   (5)
Backward: ∂c/∂r_i = ∂c/∂r_o   (6)
It is obvious by construction that the output r_o of the quantize_k STE is a real number representable by k bits. Also, since (2^k − 1) r_o is a k-bit fixed-point integer, the dot product of two sequences of such k-bit real numbers can be efficiently calculated by using the fixed-point integer dot product in Eqn. 3, followed by proper scaling.
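The forward pass of quantize_k (Eqn. 5) reduces to a one-line rounding rule; a minimal Python sketch (ours, not the paper's code) for scalar inputs in [0, 1]:

```python
def quantize_k(r_i, k):
    """Forward pass of quantize_k (Eqn. 5): snap r_i in [0, 1] to the nearest
    of the 2^k evenly spaced levels {0, 1/(2^k - 1), ..., 1}. Under the STE,
    the backward pass would simply copy the gradient through unchanged."""
    n = float(2 ** k - 1)
    return round(r_i * n) / n

# With k = 2 the representable levels are 0, 1/3, 2/3, 1.
q = [quantize_k(r, 2) for r in (0.0, 0.3, 0.7, 1.0)]
```

Note that (2^k − 1) · quantize_k(r_i, k) is always an integer, which is what lets the fixed-point kernel of Eqn. 3 apply after scaling.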
In this section we detail our approach to getting low bitwidth weights.
In previous works, STE has been used to binarize the weights. For example in BNN, weights are binarized by the following STE:
Forward: r_o = sign(r_i)
Backward: ∂c/∂r_i = ∂c/∂r_o · 1_{|r_i| ≤ 1}

Here sign(r_i) returns one of two possible values: {−1, +1}.
In XNOR-Net, weights are binarized by the following STE, with the difference being that weights are scaled after binarization:
Forward: r_o = sign(r_i) × E_F(|r_i|)
Backward: ∂c/∂r_i = ∂c/∂r_o
In XNOR-Net, the scaling factor is the mean of absolute value of each output channel of weights. The rationale is that introducing this scaling factor will increase the value range of weights, while still being able to exploit bit convolution kernels. However, the channel-wise scaling factors will make it impossible to exploit bit convolution kernels when computing the convolution between gradients and the weights during back propagation. Hence, in our experiments, we use a constant scalar to scale all filters instead of doing channel-wise scaling. We use the following STE for all neural networks that have binary weights in this paper:
Forward: r_o = sign(r_i) × E(|r_i|)   (7)
Backward: ∂c/∂r_i = ∂c/∂r_o   (8)
In case we use a k-bit representation of the weights with k > 1, we apply the STE f^w_k to weights as follows:

Forward: r_o = f^w_k(r_i) = 2 quantize_k( tanh(r_i) / (2 max(|tanh(r_i)|)) + 1/2 ) − 1   (9)
Backward: ∂c/∂r_i = (∂r_o/∂r_i)(∂c/∂r_o)   (10)
Note here we use tanh to limit the value range of weights to [−1, 1] before quantizing to k-bit. By construction, tanh(r_i) / (2 max(|tanh(r_i)|)) + 1/2 is a number in [0, 1], where the maximum is taken over all weights in that layer. quantize_k will then quantize this number to a k-bit fixed-point number ranging in [0, 1]. Finally an affine transform will bring the range of f^w_k(r_i) to [−1, 1].
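A minimal Python sketch of Eqn. 9 applied to a flat list of weights (our illustration; `quantize_k` is the forward pass of Eqn. 5):

```python
import math

def quantize_k(r, k):
    """Forward pass of quantize_k (Eqn. 5)."""
    n = float(2 ** k - 1)
    return round(r * n) / n

def quantize_weights(ws, k):
    """Eqn. 9: tanh squashes each weight, an affine map brings it into [0, 1]
    (dividing by the layer-wide max of |tanh(w)|), quantize_k snaps it to
    2^k levels, and a final affine map restores the range [-1, 1]."""
    t = [math.tanh(w) for w in ws]
    m = max(abs(v) for v in t)  # max over all weights in the layer
    return [2.0 * quantize_k(v / (2.0 * m) + 0.5, k) - 1.0 for v in t]

q = quantize_weights([-1.0, 0.0, 1.0], k=1)  # extremes map to -1 and +1
```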
Next we detail our approach to getting low bitwidth activations that are input to convolutions, which is of critical importance in replacing floating-point convolutions by less computation-intensive bit convolutions.
In BNN and XNOR-Net, activations are binarized in the same way as weights. However, we failed to reproduce the results of XNOR-Net when following their method of binarizing activations, and the binarization approach of BNN is claimed by (Rastegari et al., 2016) to cause severe prediction accuracy degradation when applied to ImageNet models like AlexNet. Hence instead, we apply an STE on the input activations r of each weight layer. Here we assume the output of the previous layer has passed through a bounded activation function h, which ensures r ∈ [0, 1]. In DoReFa-Net, quantization of activations r to k-bit is simply:

f^a_k(r) = quantize_k(r).   (11)
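A sketch of Eqn. 11 (ours, not the paper's code). The clip to [0, 1] stands in for the bounded activation function h, which is an assumption of this sketch rather than a specific choice made in the text:

```python
def quantize_k(r, k):
    """Forward pass of quantize_k (Eqn. 5)."""
    n = float(2 ** k - 1)
    return round(r * n) / n

def quantize_activation(r, k):
    """Eqn. 11: f^a_k(r) = quantize_k(r), assuming a bounded activation
    (modeled here as a clip to [0, 1]) has already been applied."""
    return quantize_k(min(max(r, 0.0), 1.0), k)
```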
We have demonstrated deterministic quantization to produce low bitwidth weights and activations. However, we find stochastic quantization is necessary for low bitwidth gradients to be effective. This is in agreement with experiments of (Gupta et al., 2015) on 16-bit weights and 16-bit gradients.
To quantize gradients to low bitwidth, it is important to note that gradients are unbounded and may have a significantly larger value range than activations. Recall that in Eqn. 11 we can map the range of activations to [0, 1] by passing values through differentiable nonlinear functions. However, this kind of construction does not exist for gradients. Therefore we designed the following function for k-bit quantization of gradients:

f̃^γ_k(dr) = 2 max0(|dr|) [ quantize_k( dr / (2 max0(|dr|)) + 1/2 ) − 1/2 ].

Here dr = ∂c/∂r is the back-propagated gradient of the output r of some layer, and the maximum max0(·) is taken over all axes of the gradient tensor dr except for the mini-batch axis (therefore each instance in a mini-batch will have its own scaling factor). The above function first applies an affine transform on the gradient to map it into [0, 1], and then inverts the transform after quantization.

To further compensate for the potential bias introduced by gradient quantization, we introduce an extra noise function N(k) = σ / (2^k − 1), where σ ~ Uniform(−0.5, 0.5). (Note we do not need to clip the value of the noise, as the two end points of a uniform distribution are almost surely never attained.) The noise therefore has the same magnitude as the possible quantization error. We find the artificial noise to be critical for achieving good performance. Finally, the expression we use to quantize gradients to k-bit numbers is as follows:

f̃^γ_k(dr) = 2 max0(|dr|) [ quantize_k( dr / (2 max0(|dr|)) + 1/2 + N(k) ) − 1/2 ].   (12)
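A Python sketch of Eqn. 12 for a single flat gradient tensor, i.e. one mini-batch instance (our illustration; the injectable `rng` parameter and the safety clip are conveniences of this sketch, not part of the formulation):

```python
import random

def quantize_k(r, k):
    """Forward pass of quantize_k (Eqn. 5)."""
    n = float(2 ** k - 1)
    return round(r * n) / n

def quantize_grad(dr, k, rng=random.random):
    """Eqn. 12 sketch: affine-map the gradient into [0, 1] using the
    per-instance max magnitude, add uniform noise N(k) = sigma/(2^k - 1),
    quantize to k bits, then invert the affine map."""
    m = max(abs(g) for g in dr)          # max over all axes of this instance
    n = float(2 ** k - 1)
    out = []
    for g in dr:
        noise = (rng() - 0.5) / n        # sigma ~ Uniform(-0.5, 0.5)
        v = g / (2.0 * m) + 0.5 + noise
        v = min(max(v, 0.0), 1.0)        # safety clip in this sketch only
        out.append(2.0 * m * (quantize_k(v, k) - 0.5))
    return out

# With the noise fixed at zero, the extreme gradients are preserved exactly.
out = quantize_grad([-2.0, 0.0, 2.0], 2, rng=lambda: 0.5)
```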
The quantization of gradients is done on the backward pass only. Hence we apply the following STE on the output of each convolutional layer:

Forward: r_o = r_i   (13)
Backward: ∂c/∂r_i = f̃^γ_k(∂c/∂r_o)   (14)
We give a sample training algorithm of DoReFa-Net as Algorithm 1. W.l.o.g., the network is assumed to have a feed-forward linear topology, and details like batch normalization and pooling layers are omitted. Note that all the expensive operations (forward, backward_input, backward_weight, in convolutional as well as fully-connected layers) now operate on low bitwidth numbers. By construction, there is always an affine mapping between these low bitwidth numbers and fixed-point integers. As a result, all the expensive operations can be accelerated significantly by the fixed-point integer dot product kernel (Eqn. 3).

Among all layers in a DCNN, the first and the last layers appear to be different from the rest, as they interface with the input and output of the network. For the first layer, the input is often an image, which may contain 8-bit features. On the other hand, the output layer typically produces approximately one-hot vectors, which are close to bit vectors by definition. It is an interesting question whether these differences would cause the first and last layers to exhibit different behavior when converted to low bitwidth counterparts.
In the related work of (Han et al., 2015b), which converts network weights to sparse tensors, introducing the same ratio of zeros in the first convolutional layer is found to cause more prediction accuracy degradation than in the other convolutional layers. Based on this intuition, as well as the observation that the inputs to the first layer often contain only a few channels and constitute a small proportion of the total computation complexity, we perform most of our experiments without quantizing the weights of the first convolutional layer, unless noted otherwise. Nevertheless, the outputs of the first convolutional layer are quantized to low bitwidth as they are consumed by the subsequent convolutional layer.
Similarly, when the number of output classes is small, to stay away from potential degradation of prediction accuracy, we leave the last fully-connected layer intact unless noted otherwise. Nevertheless, the gradients back-propagated from the final FC layer are properly quantized.
We will give the empirical evidence in Section 3.3.
One of the motivations for creating low bitwidth neural networks is to save run-time memory footprint during inference. A naive implementation of Algorithm 1 would store activations in full-precision numbers, consuming much memory at run-time. In particular, if the bounded activation function h involves floating-point arithmetic, there will be a non-negligible amount of non-bitwise operations related to the computation of f^a_k ∘ h.

There are simple solutions to this problem. Notice that it is possible to fuse Step 3, Step 4 and Step 6 of Algorithm 1 to avoid storing intermediate results in full precision. Apart from this, when h is monotonic, f^a_k ∘ h is also monotonic, and the few possible output values of f^a_k ∘ h correspond to several non-overlapping ranges of its input; hence we can implement the computation of f^a_k ∘ h by several comparisons between fixed-point numbers and avoid generating intermediate results.

Similarly, it would also be desirable to fuse Step 11, Step 12 and Step 13 of the previous iteration to avoid generating and storing full-precision intermediate gradients. The situation is more complex when there are intermediate pooling layers. Nevertheless, if the pooling layer is max-pooling, we can still do the fusion, as the quantize_k function commutes with the max function:

quantize_k(max(a, b)) = max(quantize_k(a), quantize_k(b)),   (15)

hence the quantized outputs can again be generated by comparisons between fixed-point numbers.
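Eqn. 15 holds because quantize_k is monotone non-decreasing; a quick check of the identity in Python (our illustration):

```python
def quantize_k(r, k):
    """Forward pass of quantize_k (Eqn. 5); monotone non-decreasing in r."""
    n = float(2 ** k - 1)
    return round(r * n) / n

# Eqn. 15: because quantize_k is monotone, it commutes with max, so
# max-pooling can operate directly on the fixed-point values.
vals = [0.12, 0.48, 0.55, 0.91]
lhs = quantize_k(max(vals), 3)
rhs = max(quantize_k(v, 3) for v in vals)
```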
Table 1: Prediction accuracy on SVHN under different (W, A, G) bitwidth configurations.

| W | A | G | Training Complexity | Inference Complexity | Storage Relative Size | Model A Accuracy | Model B Accuracy | Model C Accuracy | Model D Accuracy |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 2 | 3 | 1 | 1 | 0.934 | 0.924 | 0.910 | 0.803 |
| 1 | 1 | 4 | 5 | 1 | 1 | 0.968 | 0.961 | 0.916 | 0.846 |
| 1 | 1 | 8 | 9 | 1 | 1 | 0.970 | 0.962 | 0.902 | 0.828 |
| 1 | 1 | 32 | - | - | 1 | 0.971 | 0.963 | 0.921 | 0.841 |
| 1 | 2 | 2 | 4 | 2 | 1 | 0.909 | 0.930 | 0.900 | 0.808 |
| 1 | 2 | 3 | 5 | 2 | 1 | 0.968 | 0.964 | 0.934 | 0.878 |
| 1 | 2 | 4 | 6 | 2 | 1 | 0.975 | 0.969 | 0.939 | 0.878 |
| 2 | 1 | 2 | 6 | 2 | 2 | 0.927 | 0.928 | 0.909 | 0.846 |
| 2 | 1 | 4 | 10 | 2 | 2 | 0.969 | 0.957 | 0.904 | 0.827 |
| 1 | 2 | 8 | 10 | 2 | 1 | 0.975 | 0.971 | 0.946 | 0.866 |
| 1 | 2 | 32 | - | - | 1 | 0.976 | 0.970 | 0.950 | 0.865 |
| 1 | 3 | 3 | 6 | 3 | 1 | 0.968 | 0.964 | 0.946 | 0.887 |
| 1 | 3 | 4 | 7 | 3 | 1 | 0.974 | 0.974 | 0.959 | 0.897 |
| 1 | 3 | 6 | 9 | 3 | 1 | 0.977 | 0.974 | 0.949 | 0.916 |
| 1 | 4 | 2 | 6 | 4 | 1 | 0.815 | 0.898 | 0.911 | 0.868 |
| 1 | 4 | 4 | 8 | 4 | 1 | 0.975 | 0.974 | 0.962 | 0.915 |
| 1 | 4 | 8 | 12 | 4 | 1 | 0.977 | 0.975 | 0.955 | 0.895 |
| 2 | 2 | 2 | 8 | 4 | 1 | 0.900 | 0.919 | 0.856 | 0.842 |
| 8 | 8 | 8 | - | - | 8 | 0.970 | 0.955 | | |
| 32 | 32 | 32 | - | - | 32 | 0.975 | 0.975 | 0.972 | 0.950 |
We explore the configuration space of combinations of bitwidth of weights, activations and gradients by experiments on the SVHN dataset.
The SVHN dataset (Netzer et al., 2011) is a real-world digit recognition dataset consisting of photos of house numbers in Google Street View images. We consider the "cropped" format of the dataset: 32-by-32 colored images centered around a single character. There are 73257 digits for training, 26032 digits for testing, and 531131 less difficult samples which can be used as extra training data. The images are resized to 40x40 before being fed into the network.
For convolutions in a DoReFa-Net, if we have W-bit weights, A-bit activations and G-bit gradients, the relative forward and backward computation complexity and the relative storage size can be computed from Eqn. 3; we list them in Table 1. As it would not be computationally efficient to use bit convolution kernels for convolutions between 32-bit numbers, and noting that previous works like BNN and XNOR-Net have already compared bit convolution kernels with 32-bit convolution kernels, we omit the computation complexity comparison for the 32-bit control experiments.
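The complexity columns of Table 1 are consistent with the following reading (this is our inference from the listed numbers, not a formula stated explicitly in the text): forward convolutions cost in proportion to W·A, and backward convolutions involving gradients cost in proportion to W·G, so the relative training complexity is W·A + W·G and the relative inference complexity is W·A:

```python
def training_complexity(W, A, G):
    """Relative bit-op cost of training: W*A (forward) + W*G (backward),
    per our reading of Table 1."""
    return W * A + W * G

def inference_complexity(W, A):
    """Relative bit-op cost of inference: W*A (forward only)."""
    return W * A

# Reproduce a few rows of Table 1 under this reading:
# (1,1,2) -> 3, (1,2,4) -> 6, (2,1,4) -> 10.
rows = [(1, 1, 2), (1, 2, 4), (2, 1, 4)]
costs = [training_complexity(*r) for r in rows]
```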
We use the prediction accuracy of several CNN models on the SVHN dataset to evaluate the efficacy of configurations. Model A is a CNN that costs about 80 MFLOPs for one 40x40 image, and it consists of seven convolutional layers and one fully-connected layer.
Models B, C and D are derived from Model A by reducing the number of channels for all seven convolutional layers by 50%, 75% and 87.5%, respectively. The listed prediction accuracy is the maximum accuracy on the test set over 200 epochs. We use the ADAM (Kingma & Ba, 2014) learning rule with a learning rate of 0.001.

In general, having low bitwidth weights, activations and gradients will cause degradation in prediction accuracy. But it should be noted that low bitwidth networks have much reduced resource requirements.
As balancing between multiple factors like training time, inference time, model size and accuracy is more a problem of practical trade-off, there will be no definite conclusion on which combination one should choose. Nevertheless, we find in these experiments that weights, activations and gradients are progressively more sensitive to bitwidth, and that using gradients with too small a bitwidth significantly degrades prediction accuracy. Based on these observations, we select combinations with low weight bitwidth and comparatively higher gradient bitwidth as rational configurations and use them for most of our experiments on the ImageNet dataset.
Table 1 also shows that the relative number of channels significantly affects the prediction quality degradation resulting from bitwidth reduction. For example, there is no significant loss of prediction accuracy when going from the 32-bit model to DoReFa-Net for Model A, which is not the case for Model C. We conjecture that "more capable" models like those with more channels are less sensitive to bitwidth differences. On the other hand, Table 1 also suggests a method to compensate for the prediction quality degradation: increasing the bitwidth of activations for models with fewer channels, at the cost of increased computation complexity for inference and training. However, the optimal bitwidth of gradients seems less related to the number of model channels, and prediction quality saturates with 8-bit gradients most of the time.
We further evaluate DoReFa-Net on the ILSVRC12 (Deng et al., 2009) image classification dataset, which contains about 1.2 million high-resolution natural images for training, spanning 1000 categories of objects. The validation set contains 50k images. We report our single-crop evaluation result using top-1 accuracy. The images are resized to 224x224 before being fed into the network.
The results are listed in Table 2. The baseline AlexNet model that scores 55.9% single-crop top-1 accuracy is a best-effort replication of the model in (Krizhevsky et al., 2012), with the second, fourth and fifth convolutions split into two parallel blocks. We replace the Local Contrast Renormalization layer with a Batch Normalization layer (Ioffe & Szegedy, 2015). We use the ADAM learning rule, decreasing the learning rate in stages whenever the accuracy curves become flat.
From the table, it can be seen that increasing the bitwidth of activations from 1-bit to 2-bit and even to 4-bit, while still keeping 1-bit weights, leads to a significant accuracy increase, approaching the accuracy of the model where both weights and activations are 32-bit. Rounding gradients to 6-bit produces accuracies similar to 32-bit gradients, in the experiments "1-1-6" vs. "1-1-32", "1-2-6" vs. "1-2-32", and "1-3-6" vs. "1-3-32".

The rows marked "initialized" mean the model training has been initialized with a 32-bit model. It can be seen that there is a considerable gap between the best accuracy of a trained-from-scratch model and an initialized model. Closing this gap is left to future work. Nevertheless, it shows the potential for improving the accuracy of DoReFa-Net.
Table 2: Single-crop top-1 accuracy of AlexNet-derived DoReFa-Nets on the ILSVRC12 validation set.

| W | A | G | Training Complexity | Inference Complexity | Storage Relative Size | AlexNet Accuracy |
|---|---|---|---|---|---|---|
| 1 | 1 | 6 | 7 | 1 | 1 | 0.395 |
| 1 | 1 | 8 | 9 | 1 | 1 | 0.395 |
| 1 | 1 | 32 | - | 1 | 1 | 0.279 (BNN) |
| 1 | 1 | 32 | - | 1 | 1 | 0.442 (XNOR-Net) |
| 1 | 1 | 32 | - | 1 | 1 | 0.401 |
| 1 | 1 | 32 | - | 1 | 1 | 0.436 (initialized) |
| 1 | 2 | 6 | 8 | 2 | 1 | 0.461 |
| 1 | 2 | 8 | 10 | 2 | 1 | 0.463 |
| 1 | 2 | 32 | - | 2 | 1 | 0.477 |
| 1 | 2 | 32 | - | 2 | 1 | 0.498 (initialized) |
| 1 | 3 | 6 | 9 | 3 | 1 | 0.471 |
| 1 | 3 | 32 | - | 3 | 1 | 0.484 |
| 1 | 4 | 6 | - | 4 | 1 | 0.482 |
| 1 | 4 | 32 | - | 4 | 1 | 0.503 |
| 1 | 4 | 32 | - | 4 | 1 | 0.530 (initialized) |
| 8 | 8 | 8 | - | - | 8 | 0.530 |
| 32 | 32 | 32 | - | - | 32 | 0.559 |
Figure 1 shows the evolution of accuracy vs. epoch curves of DoReFa-Net. It can be seen that quantizing gradients to 6-bit does not cause the training curve to differ significantly from the curve without gradient quantization. However, using 4-bit gradients as in "1-2-4" leads to significant accuracy degradation.

Figure 2 shows the histogram of gradients of layer "conv3" of the "1-2-6" AlexNet model at epochs 5 and 35. As the histogram remains mostly unchanged across epochs, we omit the histograms of the other epochs for clarity.

Figure 3(a) shows the histogram of weights of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. Though the scale of the weights changes with the epoch number, the distribution of weights is approximately symmetric.

Figure 3(b) shows the histogram of activations of layer "conv3" of the "1-2-6" AlexNet model at epochs 5, 15 and 35. The distributions of activations are stable throughout the training process.
To answer the question of whether the first and the last layers need to be treated specially when quantizing to low bitwidth, we use the same Models A, B and C from Table 1 to find out whether it is cost-effective to quantize the first and last layers to low bitwidth, and collect the results in Table 3.

It can be seen that quantizing the first and the last layers indeed leads to significant accuracy degradation, and models with fewer channels suffer more. The degradation to some extent justifies the practice of BNN and XNOR-Net of not quantizing these two layers.
Table 3: Effect of additionally quantizing the first and last layers (SVHN).

| Scheme | Model A Accuracy | Model B Accuracy | Model C Accuracy |
|---|---|---|---|
| (1, 2, 4) | 0.975 | 0.969 | 0.939 |
| (1, 2, 4) + first | 0.972 | 0.963 | 0.932 |
| (1, 2, 4) + last | 0.973 | 0.969 | 0.927 |
| (1, 2, 4) + first + last | 0.971 | 0.961 | 0.928 |
By binarizing weights and activations, binarized neural networks like BNN and XNOR-Net have enabled acceleration of the forward pass with bit convolution kernels. However, the backward pass of binarized networks still requires convolutions between floating-point gradients and weights, which cannot efficiently exploit bit convolution kernels as gradients are in general not low bitwidth numbers.
(Lin et al., 2015) makes a step further towards low bitwidth gradients by converting some multiplications to bit-shift. However, the number of additions between high bitwidth numbers remains at the same order of magnitude as before, leading to reduced overall speedup.
There is also another series of work (Seide et al., 2014) that quantizes gradients before communication in distributed computation settings. However, the work is more concerned with decreasing the amount of communication traffic, and does not deal with the bitwidth of gradients used in back-propagation. In particular, they use full precision gradients during the backward pass, and quantize the gradients only before sending them to other computation nodes. In contrast, we quantize gradients each time before they reach the selected convolution layers during the backward pass.
To the best of our knowledge, our work is the first to reduce the bitwidth of gradients to 6 bits and lower, while still achieving comparable prediction accuracy without altering other aspects of the neural network model, such as increasing the number of channels, for models as large as AlexNet on the ImageNet dataset.
We have introduced DoReFa-Net, a method to train a convolutional neural network that has low bitwidth weights and activations using low bitwidth parameter gradients. We find that weights and activations can be deterministically quantized while gradients need to be stochastically quantized.
As most convolutions during forward/backward passes are now taking low bitwidth weights and activations/gradients respectively, DoReFa-Net can use the bit convolution kernels to accelerate both training and inference process. Our experiments on SVHN and ImageNet datasets demonstrate that DoReFa-Net can achieve comparable prediction accuracy as their 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1% top-1 accuracy on ImageNet validation set.
As future work, it would be interesting to investigate using FPGAs to train DoReFa-Net, as the resource requirement of computation units for k-bit arithmetic on FPGA strongly favors low bitwidth convolutions.
References (partial)

- Abadi, M., et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
- Deng, J., et al. ImageNet: a large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255, 2009.
- Netzer, Y., et al. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
- Seide, F., et al. 1-bit stochastic gradient descent and its application to data-parallel distributed training of speech DNNs. In INTERSPEECH, pp. 1058-1062, 2014.