Neural network quantization [3, 4, 12, 15, 21, 22, 23, 25, 31, 32] has attracted a large amount of attention due to the resource and latency constraints of real applications. Recent progress on neural network quantization has shown that the performance of quantized models can be as good as that of full precision models at moderate target bit-widths such as 4 bits. Customized hardware can be configured to support multiple bit-widths for neural networks. To fully exploit the power of model quantization, mixed precision quantization strategies have been proposed to strike a better balance between computation cost and model accuracy. With more flexibility to distribute the computation budget across layers [4, 12, 25], or even weight kernels, quantized models with mixed precision usually achieve more favorable performance than those with uniform precision.
Current approaches for mixed precision quantization usually borrow ideas from the neural architecture search (NAS) literature. Suppose we have a neural network in which each convolution layer consists of branches, where each branch is the quantized convolution at a different bit-width. Finding the best configuration for a mixed precision model can then be achieved by preserving a single branch for each convolution layer and pruning all other branches, which is conceptually equivalent to recent NAS algorithms that search for sub-networks within a supergraph [2, 20, 24, 26]. ENAS and SNAS employ reinforcement learning (RL) to learn a policy that samples network blocks from a supergraph. ReLeQ and HAQ follow this footprint and employ reinforcement learning to choose layer-wise bit-width configurations for a neural network. AutoQ further optimizes the bit-width of each convolution kernel using a hierarchical RL strategy. ProxylessNAS and FBNet adopt a path sampling method to jointly learn model weights and importance scores of each operation in the supergraph. DNAS directly reuses this path sampling method and adds a regularization term proportional to the computation cost or model size, in order to discover mixed precision models with a good trade-off between computational resources and accuracy. Uniform Sampling (US) uses uniform sampling to draw subnetworks from the supergraph during training and then searches for pruned or quantized models using an evolutionary algorithm.
Table 1 (excerpt; per-method column headers not recoverable):
support kernel-wise quantization: ✗ ✗ ✓ ✗ ✗ ✗ ✓
support channel pruning: ✗ ✗ ✓ ✗ ✓ ✗ ✓
However, previous approaches to mixed precision quantization mostly adopt NAS algorithms directly and do not leverage specific properties of quantized models. Different from NAS and model pruning, the quantitative difference between weights and activations quantized at similar bit-widths is small. For example, choosing 4 or 5 bits for one weight matrix only introduces a small difference in value, assuming weights are uniformly distributed and a linear quantization scheme is used. Thus the transition from one bit-width to its neighbors can be considered a differentiable operation with appropriate parameterization. Recently, DQ utilizes Straight-Through Estimation (STE) to facilitate differentiable bit-switching by treating the bit-width of each layer as a continuous parameter. Here, we propose a new approach that treats bit-widths as continuous values by interpolating the quantized weight or activation values of the two neighboring integer bit-widths. Such an approach facilitates an efficient one-shot differentiable optimization procedure for mixed precision quantization. By allocating differentiable bit-widths to layers or kernels, it enables both layer-wise and kernel-wise quantization. A high-level comparison of our method and previous mixed precision methods is shown in Table 1.
In summary, the contribution of this work is threefold.
We propose a fractional bit-width formulation that creates a smooth transition between neighboring quantized bit-widths of network weights and activations, facilitating differentiable search in the layer-wise or kernel-wise precision dimension.
Our mixed precision quantization algorithm only needs one-shot training of the network, greatly reducing the exploration cost for resource-constrained tasks.
Our simple and straightforward formulation is ready to be used with different quantization schemes. We show superior performance compared to uniform precision approaches and previous mixed precision approaches on a wide range of model variants and with different quantization schemes.
2 Related Work
Quantized Neural Networks Previous quantization techniques can be categorized into two types. The first type, post-training quantization, directly quantizes the weights and activations of a pretrained full-precision model to lower bit-widths [13, 18]. This type of method typically suffers from significant performance degradation, as the training process is ignorant of the quantization procedure. The second type, quantization-aware training, incorporates quantization into the training stage. Early studies in this direction employ a single precision for the whole neural network. For example, DoReFa proposes to transform the unbounded weights into a finite interval to reduce the undesired quantization error introduced by infrequent large outliers. PACT investigates the effect of clipping activations in different layers, finding that the optimal clipping levels are layer-dependent. SAT investigates the gradient scales in training with quantized weights, and further improves model performance by adjusting weight scales. In another direction, some work assigns different bit-widths to different layers or kernels, enabling more flexible computation budget allocation. The first attempts employ reinforcement learning with rewards from memory and computational cost estimated by formulas or simulators. AutoQ modifies the training procedure into a hierarchical strategy, resulting in fine-grained kernel-wise quantization. However, these RL strategies need to sample and train a large number of model variants, which is very resource-demanding. DNAS resorts to a differentiable strategy by constructing a supernet with each layer composed of a linear combination of outputs from different bit-widths. However, due to the discrepancy between the search process and the final configuration, it still needs to retrain the discovered model candidates. To further improve search efficiency, we propose a one-shot differentiable search method with fractional bit-widths. Due to the smooth transition between fractional bit-widths and final integer bit-widths, our method embeds the bit-width search and model finetuning stages in a single pass of model training. Meanwhile, our technique supports kernel-wise quantization with channel pruning in the same framework by assigning 0 bits to pruned channels, similar to prior work but through a differentiable approach with much reduced search cost. It is also orthogonal to Uniform Sampling (US) for joint quantization and pruning, which trains a supernet by uniform sampling and searches for good sub-architectures with an evolutionary algorithm.
Network Pruning Network pruning is an approach orthogonal to quantization for speeding up inference of neural networks. Early work compresses bulky models by learning connections together with weights, which produces unstructured connectivity in the final network. Later, structured compression by kernel-wise or channel-wise [5, 9, 14, 27] pruning was proposed, where the learned architecture is more amenable to acceleration on modern hardware. As an example, one approach identifies and prunes insignificant channels in each layer by penalizing the scaling factor of the batch normalization layer. More recently, NAS algorithms have been leveraged to guide network pruning. One line of work presents a one-shot search algorithm that greedily slims a pretrained slimmable neural network. Another proposes a one-shot resource-aware search algorithm using FLOPs as an L1 regularization term on the scaling factor of the batch normalization layer. We adopt a similar strategy, using BitOPs and model size as L1 regularization terms computed from the trainable fractional bit-widths in our framework.
3 Mixed Precision Quantization
In this section, we introduce our proposed method for mixed precision quantization. Our one-shot training pipeline involves two steps: bit-width searching and finetuning. We first introduce the implementation of fractional bit-widths and the integration of the resource constraint into the search process. After that, we introduce the implementation of kernel-wise mixed precision jointly with channel pruning.
3.1 Searching with fractional bit-widths
In order to learn bit-widths dynamically in one-shot training, it is necessary to make them differentiable and define their derivatives accordingly. To this end, we first examine a generic operation $q_b(x)$ that quantizes a value $x$ to $b$ bits. Typically, $q_b(x)$ is well-defined only for positive integer values of $b$. To generalize the bit-width to an arbitrary positive real number $b$, we apply a first-order expansion around one of its nearby integers, and approximate the derivative at this integer by the slope of the segment joining the two grid points adjacent to $b$. Such a linear interpolation reads

$$q_b(x) \approx (\lceil b \rceil - b)\, q_{\lfloor b \rfloor}(x) + (b - \lfloor b \rfloor)\, q_{\lceil b \rceil}(x), \tag{1}$$
where $\lfloor \cdot \rfloor$ and $\lceil \cdot \rceil$ denote the floor and ceiling functions, respectively. In other words, we can approximate an operation with a fractional bit-width by a linear combination of two operations with integer bit-widths, thus naturally achieving differentiability and making the bit-width learnable through typical gradient-based optimization such as SGD. Note that the approximation in Eq. (1) turns into a strict equality if the original operation is linear in $b$ or if $b$ takes an integer value. The basic idea is illustrated in Fig. 1. In Eq. (1), the two rounding functions (floor and ceiling) applied to the bit-width have vanishing gradients with respect to their argument, and thus the derivative of Eq. (1) is given by

$$\frac{\partial q_b(x)}{\partial b} = q_{\lceil b \rceil}(x) - q_{\lfloor b \rfloor}(x). \tag{2}$$
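The interpolation in Eq. (1) and its derivative in Eq. (2) can be sketched in a few lines. This is a minimal illustration assuming a linear quantizer on $[0, 1]$; the function names are ours, not from the paper's code.

```python
import math

def quantize(x, k):
    # Linear k-bit quantizer on [0, 1]: snap x to the nearest of 2^k - 1 steps.
    s = (1 << k) - 1
    return round(x * s) / s

def quantize_frac(x, b):
    # Fractional bit-width b (Eq. (1)): linear interpolation between the
    # quantizations at the two neighboring integer bit-widths.
    lo, hi = math.floor(b), math.ceil(b)
    if lo == hi:                      # integer b: exact quantization
        return quantize(x, lo)
    return (hi - b) * quantize(x, lo) + (b - lo) * quantize(x, hi)

def dquantize_db(x, b):
    # Derivative w.r.t. the bit-width (Eq. (2)): the slope of the segment
    # joining the two neighboring integer-bit quantizations.
    lo, hi = math.floor(b), math.ceil(b)
    if lo == hi:
        hi = lo + 1
    return quantize(x, hi) - quantize(x, lo)
```

At integer bit-widths the interpolation reduces exactly to the integer-bit quantizer, which is what makes the final rounding step of the pipeline a small perturbation.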
The difference between this linear interpolation scheme and the widely adopted straight-through estimation (STE) is that it uses soft bit-widths in both forward and backward propagation, rather than hard bit-widths in the forward pass and soft bit-widths in back-propagation. In this way, the computed gradient reflects the true direction along which the network parameters need to evolve, which results in better convergence.
Throughout, we adopt the DoReFa scheme for weight quantization and the PACT scheme for activation quantization. The quantization function for both is the same, defined as

$$q_b(x) = \frac{\mathrm{round}(x \cdot s)}{s}, \qquad s = 2^b - 1, \tag{3}$$

where $x \in [0, 1]$, $\mathrm{round}(\cdot)$ indicates rounding to the nearest integer, and $b$ is the quantization bit-width. Thus, for both weight and activation quantization, $q_b$ is well-defined for integer bit-widths, and quantization with fractional bit-widths is implemented with Eq. (1). Weight quantization is given by $2\, q_{b_w}(\tilde{w}) - 1$, where $\tilde{w}$ is the transformed weight clamped to the interval $[0, 1]$; activation quantization is given by $\alpha\, q_{b_a}(\bar{x} / \alpha)$, where $\alpha$ is a learnable parameter and $\bar{x}$ is the original activation clipped to $[0, \alpha]$. Here $b_w$ and $b_a$ are the learnable fractional bit-widths for weights and activations, respectively. It is also possible to privatize the bit-width of each kernel, enabling kernel-wise mixed precision quantization, as discussed later in Section 3.4.
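A minimal sketch of the two quantizers with fractional bit-widths follows. The `(w + 1) / 2` weight transform is a simplified stand-in for DoReFa's tanh-based normalization, and all names are illustrative.

```python
import math

def q(x, b):
    # Eq. (3) generalized to fractional b via the Eq. (1) interpolation.
    def qk(v, k):
        s = (1 << k) - 1
        return round(v * s) / s
    lo, hi = math.floor(b), math.ceil(b)
    if lo == hi:
        return qk(x, lo)
    return (hi - b) * qk(x, lo) + (b - lo) * qk(x, hi)

def quantize_weight(w, b_w):
    # DoReFa-style weight quantization: transform w into [0, 1], quantize,
    # then map back to [-1, 1]. A simple clamp of (w + 1) / 2 replaces the
    # tanh-based normalization of the original scheme for brevity.
    w_t = min(max((w + 1.0) / 2.0, 0.0), 1.0)
    return 2.0 * q(w_t, b_w) - 1.0

def quantize_act(x, b_a, alpha):
    # PACT-style activation quantization: clip at the learnable level alpha,
    # quantize the normalized value, then rescale.
    x_c = min(max(x, 0.0), alpha)
    return alpha * q(x_c / alpha, b_a)
```

Here `b_w` and `b_a` may be fractional during search and become integers after the finetuning switch.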
During the earlier searching stage, the precision assigned to each layer or kernel is still undetermined, and we aim to find the optimal bit-width configuration through training. By initializing each bit-width with some arbitrary value, we can use Eq. (1) to quantize the weights and activations of the model to fractional bit-widths. Meanwhile, this allows us to assign different bit-widths to different layers or even kernels, as well as to furnish separate precisions for weight and activation quantization. During training, the model gradually converges to an optimal bit-width for the weights and activations of each unit, enabling quantization with mixed precision.
3.2 Resource constraint as penalty loss
Restricting storage or computation cost is essential for model quantization, as the original purpose of quantization is to save resources when deploying bulky models on portable devices or embedded systems. To this end, previous works constrain different metrics during optimization, including memory footprint, model size [22, 23], BitOPs [7, 25], and even estimated latency or energy [15, 23]. Here, we focus on model size in bits (Bytes) for weight-only quantization, and the number of BitOPs for quantization of both weights and activations, as they can be directly calculated from the assigned bit-widths. Latency and energy consumption [15, 23] may seem to be more practical measures for real applications. However, we argue that BitOPs is also a good metric since it is determined solely by the model itself rather than by particular configurations of hardware, simulators, and compilers, which guarantees fair comparison between different approaches and advocates reproducible research.
Weight-only quantization targets shrinking the model size, while floating point operations are still needed during inference. Model size is usually expressed as the number of bits required to store the weights (and biases) of the model. For a weight of $b$ bits, the size is simply $b$; the generalized size for a fractional bit-width is thus $b$ taking a real value. The size of the whole model is obtained by summing over all weights. Note that the bit-width can be shared among all weights of a layer or within each kernel (as discussed later in Section 3.4), corresponding to layer-wise or kernel-wise quantization, respectively. For example, for a typical 2D convolution layer (without grouping) sharing the same fractional bit-width $b_w$ among all weights, the size is given by $b_w \cdot c_i c_o k_x k_y$, where $c_i$ is the number of input channels, $c_o$ is the number of output channels, and $k_x$ and $k_y$ are the horizontal and vertical kernel sizes, respectively.
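The layer-wise size formula can be written directly as a function of the (possibly fractional) bit-width; the helper below is illustrative, not from the paper.

```python
def conv_weight_size_bits(b_w, c_in, c_out, k_x, k_y):
    # Fractional model size in bits of a 2D convolution layer whose weights
    # all share one bit-width b_w; b_w may be a real number during search.
    return b_w * c_in * c_out * k_x * k_y

def bits_to_mb(bits):
    # Convert bits to megabytes, the unit used for size constraints here.
    return bits / (8 * 1024 * 1024)
```

Summing this quantity over all layers (or over kernels, each with its own `b_w`) gives the model-size term entering the penalty loss.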
Quantizing both weights and activations can effectively decrease the computation cost of real applications, which can be measured by the number of BitOPs involved in multiplications. Suppose a weight value and an activation value involved in a multiplication are quantized to $b_w$ bits and $b_a$ bits, respectively. The number of BitOPs for such a multiplication is

$$\mathrm{BitOPs} = b_w \cdot b_a. \tag{4}$$

This expression is bilinear in $b_w$ and $b_a$, which means that for fractional bit-widths, interpolating each factor with Eq. (1) recovers the product $b_w \cdot b_a$ exactly (recall that Eq. (1) is a strict equality for operations linear in the bit-width).
The total computation cost of the model is the sum over all weights and activations. For the example of a 2D convolution layer, if all weights share the same fractional bit-width $b_w$ and all input activations share the same fractional bit-width $b_a$, the number of BitOPs is given by $b_w b_a \cdot c_i c_o k_x k_y s_x s_y$, where $s_x$ and $s_y$ are the horizontal and vertical sizes of the output features, respectively.
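Bilinearity means the interpolated BitOPs count collapses back to the plain product of fractional bit-widths, which the following sketch (with illustrative names) verifies numerically.

```python
import math

def bitops_frac(b_w, b_a):
    # Apply the Eq. (1) interpolation to each factor of the bilinear cost
    # b_w * b_a. Because the cost is linear in each bit-width, the nested
    # interpolation is exactly the product of the fractional bit-widths.
    def interp(f, b):
        lo, hi = math.floor(b), math.ceil(b)
        return f(lo) if lo == hi else (hi - b) * f(lo) + (b - lo) * f(hi)
    return interp(lambda i: interp(lambda j: i * j, b_a), b_w)

def conv_bitops(b_w, b_a, c_in, c_out, k_x, k_y, s_x, s_y):
    # BitOPs of a 2D convolution whose weights share b_w and whose input
    # activations share b_a; s_x, s_y are the output feature-map sizes.
    return bitops_frac(b_w, b_a) * c_in * c_out * k_x * k_y * s_x * s_y
```

This is the reason the resource penalty can be differentiated through the same interpolation used for the quantizers themselves.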
Targeting a prescribed objective With the constraints defined properly, we can penalize them to enable constraint-aware optimization. Here, we directly define the penalty term as the L1 difference from a target constraint value:

$$L_{\mathrm{penalty}} = \Big|\sum_i S_i - S_0\Big| \quad \text{or} \quad \Big|\sum_i B_i - B_0\Big|, \tag{5}$$

where $S_0$ and $B_0$ denote the target constraints for model size and computation cost, respectively. The sum is taken over all weights in the model for size-constrained optimization, and over all weights and activations for the computation-cost-constrained case.
Adding the penalty term to the original loss (such as cross entropy for a classification task) with a coefficient $\lambda$, we arrive at the total loss for optimization:

$$L = L_{\mathrm{task}} + \lambda\, L_{\mathrm{penalty}}. \tag{6}$$

It should be noted that the value of $\lambda$ depends on the units of the constraints. Throughout the paper, we measure model size in MB (megabytes) and computation cost in GBitOPs (billions of BitOPs). In this way, the desired resource constraint can be reached through joint optimization of model parameters and bit-widths. Note that recent concurrent work adopts a similar approach to mixed precision quantization with L1 regularization on the bit-widths of weights and activations, while here we explicitly define the loss as a function of computational cost in BitOPs or model size in Bytes and incorporate the target constraint into the loss directly.
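The penalty and total loss amount to two one-line functions. The coefficient default below is a placeholder, not the paper's setting.

```python
def resource_penalty(estimated_cost, target_cost):
    # L1 distance from the target budget; differentiable almost everywhere
    # with respect to the fractional bit-widths behind estimated_cost.
    # Units (MB or GBitOPs) must match between the two arguments.
    return abs(estimated_cost - target_cost)

def total_loss(task_loss, penalty, lam=0.1):
    # lam is the penalty coefficient (lambda in the text); its proper scale
    # depends on the constraint unit, and 0.1 here is only a placeholder.
    return task_loss + lam * penalty
```

During search, `estimated_cost` would be recomputed each step from the current fractional bit-widths, so the gradient of the penalty flows back into them.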
3.3 Finetuning with mixed precision
After searching, we freeze the bit-widths by rounding them to the nearest integer values and disabling their gradients. At this point, each layer or kernel has the individual weight and activation bit-widths learned in the previous stage, and training enters the finetuning stage, which only updates model weights. The ratio between training epochs allocated to searching and finetuning is a hyper-parameter that can be freely specified. In practice, we assign 80% of the training epochs to searching and 20% to finetuning. We want to emphasize that the combination of searching and finetuning constitutes the whole training procedure, and the total number of epochs of the two stages is the same as in a traditional quantization-aware training procedure. Thus, our training method is one-shot, without extra retraining steps.
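Assuming, for illustration, an 80/20 split between searching and finetuning (matching the 120/30 epoch split of the 150-epoch schedule in Section 4.1), the one-shot pipeline reduces to a simple phase function plus a rounding step.

```python
def phase(epoch, total_epochs=150, search_frac=0.8):
    # One-shot schedule: the first search_frac of epochs learn fractional
    # bit-widths jointly with weights; afterwards bit-widths are rounded
    # and frozen, and only the weights are finetuned.
    return "search" if epoch < int(search_frac * total_epochs) else "finetune"

def freeze_bitwidths(fractional_bits):
    # Round every learned fractional bit-width to its nearest integer;
    # in a real implementation these values would also stop receiving
    # gradients from this point on.
    return [round(b) for b in fractional_bits]
```

The smooth transition of Eq. (1) is what makes this rounding a small perturbation rather than a disruptive switch.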
3.4 Kernel-wise mixed precision quantization
As mentioned above, our algorithm is not restricted to layer-wise quantization, but also supports kernel-wise quantization. Here, one kernel means the weight parameters associated with a convolution filter producing a single-channel feature map. The weight kernels of a convolution layer are assigned different bit-width parameters $b_w^{(j)}$, where $j$ is the index of the weight kernel. For each convolution of one weight kernel with the input tensor, the input tensor could in principle also be assigned a different bit-width. However, quantizing the input tensor with different bit-widths for different weight kernels incurs a large computation overhead. We therefore share the same bit-width $b_a$ for the input tensor across the computations with all weight kernels; prior work on kernel-wise quantization adopted the same strategy. For a 2D convolution layer, the number of BitOPs associated with the fractional bit-widths is given by $\sum_j b_w^{(j)} b_a \cdot c_i k_x k_y s_x s_y$, and the model size by $\sum_j b_w^{(j)} \cdot c_i k_x k_y$.
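The shared-activation, kernel-wise cost can be sketched as follows (illustrative names; one weight bit-width per output kernel, one activation bit-width per layer).

```python
def kernelwise_conv_bitops(b_w_list, b_a, c_in, k_x, k_y, s_x, s_y):
    # Kernel-wise precision: each output kernel j has its own weight
    # bit-width b_w_list[j]; the activation bit-width b_a is shared so the
    # input tensor only needs to be quantized once for the whole layer.
    return sum(b_w * b_a * c_in * k_x * k_y * s_x * s_y for b_w in b_w_list)

def kernelwise_weight_size_bits(b_w_list, c_in, k_x, k_y):
    # Model size in bits with per-kernel bit-widths.
    return sum(b_w * c_in * k_x * k_y for b_w in b_w_list)
```

With all entries of `b_w_list` equal, both quantities reduce to the layer-wise formulas above.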
3.5 Network pruning through quantization with 0-bit
The flexibility and differentiability of bit-widths enables not only kernel-wise quantization, but also channel pruning jointly with quantization. To this end, in addition to generalizing the bit-width to fractional values, we add a definition of 0-bit weight quantization, modifying Eq. (3) so that

$$q_b(x) = 0 \quad \text{when } b = 0. \tag{7}$$
In this case, weights with 0 bits are quantized to 0, and the corresponding output channel can be removed without affecting the network, which is essentially network pruning. Thus, by allowing 0 bits for weights together with kernel-wise quantization, channel pruning can be performed jointly with quantization. In practice, 0 bit is added as a candidate to the bit-width list of the weight matrix. Compared with prior work adopting a similar 0-bit pruning strategy, our method is differentiable in the bit-width, including the 0-bit case, achieving one-shot mixed precision quantization and pruning. We conduct experiments on this joint optimization of pruning and quantization in Section 4.3.
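A minimal sketch of the 0-bit extension follows. It is illustrative: in the full scheme, the weight transform must also be adjusted so that the 0-bit branch produces exactly zero weights.

```python
import math

def q0(x, k):
    # Linear quantizer extended with a 0-bit case: 0 bits quantizes the
    # value to exactly 0, i.e. the corresponding channel is pruned.
    if k == 0:
        return 0.0
    s = (1 << k) - 1
    return round(x * s) / s

def q_frac_prunable(x, b):
    # A fractional bit-width in [0, 1] smoothly interpolates between a
    # pruned (0-bit) channel and a 1-bit one, so pruning decisions stay
    # differentiable during the search stage.
    lo, hi = math.floor(b), math.ceil(b)
    if lo == hi:
        return q0(x, lo)
    return (hi - b) * q0(x, lo) + (b - lo) * q0(x, hi)
```

A kernel whose learned bit-width rounds to 0 at the freezing step is then simply dropped from the layer.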
4 Experiments
In this section, we conduct quantitative experiments with FracBits and compare it with previous quantization approaches, including the uniform quantization algorithms PACT, LQNet, and SAT, and the mixed precision quantization algorithms HAQ, AutoQ, DNAS, US, and DQ. We first compare our method with previous approaches on layer-wise mixed precision quantization in Section 4.2. Then we compare our method with the previous kernel-wise mixed precision method AutoQ on kernel-wise precision search. Finally, we conduct an ablation study on the hyper-parameters and configurations.
4.1 Implementation details
We build our algorithm on the recent quantization algorithms PACT and SAT. PACT jointly learns quantized weights and activations, where weights are quantized using the DoReFa scheme. SAT is an improved version of the PACT algorithm with gradient calibration and scale adjusting. The penalty coefficient $\lambda$ is critical for proper convergence of the network towards the required resource constraint: models under mild or aggressive constraints may call for different values of $\lambda$, and different types of resource constraints (computation cost and model size) have different scales and require different scales of the regularization term. However, in our experiments we find our algorithm is not very sensitive to the value of $\lambda$. We use one fixed value for all computation-cost-constrained experiments and another for all model-size-constrained experiments. We also find it beneficial to initialize the model at a point close to the target resource constraint, facilitating more exploration close to the target model space. We control the initial state through the fractional bits of each layer, setting them according to the bit-width that achieves a similar resource constraint in the corresponding uniformly quantized model. For all experiments quantizing both weights and activations, we set the candidate bit-widths to 2-8. For all experiments quantizing only weights, we set the candidate bit-widths to 1-8. For kernel-wise quantization experiments, we also add 0 and 1 bits to the candidate bit-widths for weights to allow channel pruning as described in Section 3.5. Since the first and last layers of a neural network have a crucial impact on model performance, we fix their bit-widths to 8 bits following previous practice.
For all experiments, we use a cosine learning rate scheduler without restarts. The learning rate is initially set to 0.05 and updated every iteration for a total of 150 epochs. We use the SGD optimizer with a momentum of 0.9 without damping, and weight decay. The batch size is set to 2048 for all models. The warmup strategy suggested in prior work is also adopted, linearly increasing the learning rate every iteration over the first five epochs before applying the cosine annealing scheduler. Bit-width search is conducted in the first 120 epochs after the warmup stage. At the 121st epoch, all fractional bit-widths are rounded to integer bits, and the network is further finetuned for the remaining 30 epochs. This rounding gives a sudden change to the network, but we do not observe any glitch in the training loss, likely due to the insignificant difference between the quantized values at two neighboring bit-widths. For kernel-wise precision quantization, we initialize the model from its layer-wise precision counterpart, which stabilizes the kernel-wise bit-width search. We adopt this strategy for all of our kernel-wise precision models.
Table 2 (excerpt; Top-1 / Top-5 accuracy in % with GBitOPs at three precision settings):
MobileNet V1, PACT: 62.6 / 84.1 (5.73), 70.3 / 89.2 (9.64), 71.1 / 89.6 (14.66)
MobileNet V2, PACT: 67.0 / 87.0 (3.32), 70.6 / 89.2 (5.35), 71.2 / 89.8 (7.96)
4.2 Quantization with layer-wise precision
We compare FracBits with previous quantization algorithms on layer-wise precision search, conducting experiments on MobileNet V1/V2 and ResNet18. Since FracBits can be used for both computation-cost-constrained and model-size-constrained bit-width search, we run experiments in both settings to validate the effectiveness of our approach.
Table 2 shows experimental results of layer-wise, computation-cost-constrained quantization on MobileNet V1/V2. We report results of our method with two quantization schemes, PACT and SAT, and denote the two variants as FracBits-PACT and FracBits-SAT. The previous methods HAQ and AutoQ use PACT as the quantization scheme, while DQ uses a scheme similar to PACT with learnable clipping bounds. FracBits-PACT outperforms HAQ on both MobileNet V1 and V2, and outperforms AutoQ and DQ on MobileNet V2. SAT is a strong uniform quantization baseline that already outperforms all previous mixed precision methods; for example, it achieves 71.9% on 5-bit MobileNet V1 and 72.1% on 5-bit MobileNet V2, almost closing the gap between full precision and quantized models. We believe that validating the effectiveness of FracBits on top of SAT helps probe the limit of mixed precision quantization algorithms. FracBits-SAT achieves slightly better performance than SAT on 4- and 5-bit MobileNet V1/V2, and significantly better results on 3-bit models, with clear absolute gains on 3-bit MobileNet V1 and V2, which proves its effectiveness on strong uniform quantization baselines. Note that the BitOPs of models using FracBits are slightly higher than the resource target, mostly within the upper range of the BitOPs constraint. This is due to the straightforward rounding operation used to discretize bit-widths, which does not re-optimize the bit-width allocation according to the resource constraint. A more sophisticated method such as integer programming could be used in the bit-width discretization step to enforce a tight resource constraint, which we leave as future work.
We show comparisons with more algorithms on ResNet18 in Table 3. Here we compare with the uniform precision approaches PACT and LQNet and the mixed precision approaches DNAS, DQ, AutoQ, and US. Except for DQ, all mixed precision approaches use PACT as the quantization scheme. Since the methods report different accuracies for full precision (FP) models, we also list the top-1 accuracy of the FP model reported in each corresponding paper and report the relative accuracy drop for each method. Comparing absolute accuracy, FracBits-PACT achieves performance comparable to state-of-the-art mixed precision methods. Note that DNAS uses several training tricks to boost performance, so its results are not directly comparable to others. Comparing relative accuracy drop, our method achieves the smallest drop on 3-bit ResNet18, and is among the top two on 4-bit ResNet18. Enhanced by the SAT quantization scheme, FracBits-SAT further improves over the SAT baseline, achieving only a small accuracy drop on 3-bit ResNet18 and a performance gain on 4-bit ResNet18.
For a more intuitive understanding of the learned bit-width structure, we plot the bit-widths of different layers for 3-bit MobileNet V2 and ResNet18 in Fig. 2. We find that models mixed-quantized under a computation cost constraint generally use larger bit-widths in the late stage of the network. Also, in MobileNet V2, depth-wise convolutions receive larger bit-widths than point-wise convolutions due to their low computation cost.
Table 4 (excerpt; Top-1 / Top-5 accuracy in % at three model size settings):
MobileNet V1, DeepComp: 37.6 / 64.3, 65.9 / 86.9, 71.1 / 89.8
MobileNet V2, DeepComp: 58.1 / 82.2, 68.0 / 88.0, 71.2 / 89.9
For model-size-constrained quantization, we compare with the previous methods Deep Compression and HAQ and the uniform quantization approach SAT in Table 4. Our FracBits-SAT consistently outperforms the mixed precision method HAQ and the strong uniform quantization baseline SAT at all experimented bit-widths. Notably, FracBits shows a sizable absolute gain in top-1 accuracy over SAT on 2-bit MobileNet V1/V2. Even in the challenging 4-bit setting, where quantized models already achieve performance similar to full precision ones, FracBits still outperforms SAT in top-1 accuracy on both MobileNet V1 and V2.
Table 5 (excerpt; Top-1 / Top-5 accuracy in %):
MobileNet V2, AutoQ: 70.8 / 90.3
4.3 Quantization with kernel-wise precision
In this section, we experiment with kernel-wise precision quantization. Among previous approaches, only AutoQ reports kernel-wise precision results, so we compare with it. In Table 5, we denote kernel-wise FracBits based on PACT and SAT as FracBits-PACT-K and FracBits-SAT-K, and compare them with AutoQ and the uniform precision method SAT. FracBits-PACT-K achieves slightly better results than AutoQ on MobileNet V2 and ResNet18, validating the effectiveness of our one-shot differentiable approach compared to a complex RL-based method. FracBits-SAT-K outperforms SAT significantly in top-1 accuracy on 3- and 4-bit MobileNet V2 and on 3- and 4-bit ResNet18. Compared to their layer-wise precision counterparts, FracBits-SAT-K outperforms FracBits-SAT on 3- and 4-bit MobileNet V2 and on 4-bit ResNet18, proving that kernel-wise quantization can further improve over strong layer-wise mixed precision models. Fig. 3 illustrates the bit-width distribution against layer index for 3-bit MobileNet V2 and ResNet18. We can see that 3-bit MobileNet V2 has a number of pruned weight kernels in the early layers and intermediate bottleneck layers, while 3-bit ResNet18 has almost no pruned kernels. We believe this is because the point-wise convolutions in MobileNet V2 have much larger computation cost than the depth-wise convolutions, so they receive a larger resource penalty during optimization, which leads to more pruned kernels.
4.4 Ablation Study
We present ablation studies related to our method in this section. Since our framework is clean and involves only one hyper-parameter, the penalty coefficient $\lambda$, we show a comparative study of different values of $\lambda$. Another variant we compare with uses stochastic bit-widths in the searching stage instead of deterministic fractional bits. To this end, we utilize Gumbel softmax to generate stochastic bit-widths based on the original fractional bit-widths; $\lambda$ is set to the same value as in the deterministic approach and the Gumbel softmax temperature is set to 1. The results are shown in Table 6. With a smaller $\lambda$ of 0.05, FracBits-SAT yields a large discrepancy from the desired BitOPs on 3-bit MobileNet V1, meaning small values of $\lambda$ may fail to reach the desired resource constraint due to a weak penalty. With a larger value of 0.2, the models still perform similarly, demonstrating the robustness of our method within a proper range of $\lambda$. We have also experimented with a much larger $\lambda$, which results in a rapid descent of bit-width values at the beginning of training and produces poor results. With Gumbel softmax, the results are slightly worse than the original FracBits-SAT, supporting the advantage of our deterministic approach. We also notice that the model with Gumbel softmax does not meet the desired computation budget for the 3- and 4-bit models.
5 Conclusion
We propose a new formulation named FracBits for mixed precision quantization. We formulate the bit-width of each layer or kernel as a continuous learnable parameter, instantiated by interpolating the quantized values of the two neighboring integer bit-widths. Our method facilitates differentiable optimization of layer-wise or kernel-wise bit-widths in a single shot of training, and can further be combined with channel pruning by formulating a pruned channel as 0-bit quantization. With only a regularization term penalizing extra computational resources during training, our method discovers proper bit-width configurations for different models, outperforming previous mixed precision and uniform precision approaches. We believe our method will motivate further research on low-precision neural networks and low-cost computational models.
References
- (2013) Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv:1308.3432.
- (2018) ProxylessNAS: direct neural architecture search on target task and hardware. arXiv:1812.00332.
- (2018) PACT: parameterized clipping activation for quantized neural networks. arXiv:1805.06085.
- (2018) ReLeQ: a reinforcement learning approach for deep quantization of neural networks. In NeurIPS.
- (2018) MorphNet: fast & simple resource-constrained structure learning of deep networks. pp. 1586–1595.
- (2017) Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv:1706.02677.
- (2019) Single path one-shot neural architecture search with uniform sampling. arXiv:1904.00420.
- (2015) Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv:1510.00149.
- (2017) Channel pruning for accelerating very deep neural networks. In ICCV, pp. 1389–1397.
- (2016) Categorical reparameterization with Gumbel-softmax. arXiv:1611.01144.
- (2019) AdaBits: neural network quantization with adaptive bit-widths. arXiv:1912.09666.
- (2019) Towards efficient training for neural network quantization. arXiv:1912.10207.
- (2018) Quantizing deep convolutional networks for efficient inference: a whitepaper. arXiv:1806.08342.
- (2017) Learning efficient convolutional networks through network slimming. In ICCV, pp. 2736–2744.
- (2019) AutoQB: AutoML for network quantization and binarization on mobile devices. arXiv:1902.05690.
- (2017) ThiNet: a filter level pruning method for deep neural network compression. In ICCV, pp. 5058–5066.
- (2019) AtomNAS: fine-grained end-to-end neural architecture search. arXiv:1912.09640.
- (2019) Data-free quantization through weight equalization and bias correction. In ICCV, pp. 1325–1334.
- (2020) BitPruning: learning bitlengths for aggressive and accurate quantization. arXiv:2002.03090.
- (2018) Efficient neural architecture search via parameter sharing. arXiv:1802.03268.
- (2016) XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, pp. 525–542.
- (2020) Mixed precision DNNs: all you need is a good parametrization. In ICLR.
- (2019) HAQ: hardware-aware automated quantization with mixed precision. In CVPR, pp. 8612–8620.
- (2019) FBNet: hardware-aware efficient ConvNet design via differentiable neural architecture search. In CVPR, pp. 10734–10742.
- (2018) Mixed precision quantization of ConvNets via differentiable neural architecture search. arXiv:1812.00090.
- (2018) SNAS: stochastic neural architecture search. arXiv:1812.09926.
- (2018) Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. arXiv:1802.00124.
- (2019) Network slimming by slimmable networks: towards one-shot architecture search for channel numbers. arXiv:1903.11728.
- (2018) Slimmable neural networks. arXiv:1812.08928.
- (2018) LQ-Nets: learned quantization for highly accurate and compact deep neural networks. In ECCV, pp. 365–382.
- (2017) Balanced quantization: an effective and efficient approach to quantized neural networks. Journal of Computer Science and Technology 32(4), pp. 667–682.
- (2016) DoReFa-Net: training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv:1606.06160.