Introduction
Deep convolutional neural networks (DCNNs), with their recent progress, have considerably changed the landscape of computer vision [Krizhevsky, Sutskever, and Hinton2012] and many other fields. To achieve close to state-of-the-art performance, a DCNN usually has a large number of parameters and high computational complexity, which can easily overwhelm the resource capacity of embedded devices. Substantial research effort has been invested in speeding up DCNNs on both general-purpose [Vanhoucke, Senior, and Mao2011, Gong et al.2014, Han et al.2015] and specialized hardware [Farabet et al.2009, Farabet et al.2011, Pham et al.2012, Chen et al.2014b, Chen et al.2014c, Zhang et al.2015a].
network      VOC12   Cityscapes   speedup
-----------  ------  -----------  -------
32-bit FCN   69.8%   62.1%        1x
2-bit BFCN   67.0%   60.3%        4.1x
1-2 BFCN     62.8%   57.4%        7.8x

Table 1: Mean IoU scores and CPU speedup of BFCN variants on the PASCAL VOC 2012 and Cityscapes validation sets.
Recent progress on low-bitwidth networks has considerably reduced parameter storage size and computation burden by using 1-bit weights and low-bitwidth activations. In particular, in BNN [Kim and Smaragdis2016] and XNOR-net [Rastegari et al.2016], the most computationally expensive convolutions in the forward pass can be computed by combining xnor and popcount operations, thanks to the following equivalence when x and y are bit vectors encoding values in {-1, +1}:

x . y = 2 * popcount(xnor(x, y)) - N,

where N is the length of the vectors. An FPGA implementation of a neural network benefits particularly from low-bitwidth computation, because the complexity of a multiplier is proportional to the square of its operand bitwidth.
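The xnor-popcount equivalence can be illustrated with a small pure-Python sketch; the function names here are illustrative, not from any particular framework:

```python
# Weights/activations take values in {-1, +1}; packing them as bits {0, 1}
# lets a dot product be computed as 2 * popcount(xnor(a, b)) - N.

def dot_xnor(a_bits, b_bits):
    """a_bits, b_bits: lists of 0/1 bits encoding -1/+1 vectors."""
    n = len(a_bits)
    # xnor(a, b) is 1 exactly where the bits agree; count those positions
    matches = sum(1 for x, y in zip(a_bits, b_bits) if x == y)  # popcount
    return 2 * matches - n

def dot_float(a_bits, b_bits):
    """Reference dot product on the decoded -1/+1 values."""
    to_pm1 = lambda b: 2 * b - 1  # 0 -> -1, 1 -> +1
    return sum(to_pm1(x) * to_pm1(y) for x, y in zip(a_bits, b_bits))
```

In real implementations the bits are packed into machine words, so one xnor plus one popcount instruction covers 32 or 64 multiply-accumulates at once.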
However, most previous research on low-bitwidth networks has focused on classification networks. In this paper, we are concerned with fully convolutional networks (FCNs), which can be thought of as performing pixel-wise classification of the input images and have applications in tasks like semantic segmentation [Long, Shelhamer, and Darrell2015]. The techniques developed in this paper can also be applied to other variants like RPN [Ren et al.2015], FCLN [Johnson, Karpathy, and Fei-Fei2015] and Densebox [Huang et al.2015]. Compared to a typical classification network, the following properties of an FCN make it a better candidate for low-bitwidth quantization.

An FCN typically has large feature maps, some of which may need to be stored for later combination, which pushes up its peak memory usage. As BFCN uses low-bitwidth feature maps, the peak memory usage is significantly reduced.

An FCN usually accepts large input images and taps into a powerful classification network like VGGNet [Simonyan and Zisserman2014] or ResNet [He et al.2015] to boost performance. The acceleration offered by the bit convolution kernel, together with the memory savings, allows BFCN to run on devices with limited computation resources.
Considering that methods for training low-bitwidth networks are still under exploration, finding a way to train a BFCN efficiently remains a challenge as well.
Our paper makes the following contributions:

We propose BFCN, an FCN with low-bitwidth weights and activations, which extends the combination of methods from Binarized Neural Networks [Courbariaux, Bengio, and David2014], XNOR-net [Rastegari et al.2016] and DoReFa-net [Zhou et al.2016].
We replace the convolutional filters in the reconstruction structure with residual blocks to better suit the needs of a low-bitwidth network. We also propose a novel bitwidth decay method to train BFCN with better performance. In our experiments, a 2-bit BFCN with residual reconstruction and linear bitwidth decay achieves a 67.0% mean intersection-over-union (IoU) score, which is 7.4% better than the vanilla variant.

Based on an ImageNet-pretrained ResNet-50 with bounded weights and activations, we train a semantic segmentation network with 2-bit weights and activations (except for the first layer) that achieves a mean IoU score of 67.0% on PASCAL VOC 2012
[Everingham et al.2015] and 60.3% on Cityscapes [Cordts et al.2016], both on the validation set, as shown in Table 1. For comparison, the baseline full-precision model achieves 69.8% and 62.1% respectively. Our network runs at about 5x the speed of its full-precision counterpart on CPU, and can be implemented on FPGA with only a few percent of the resource consumption.
Related Work
Semantic segmentation helps computers understand the structure of images, and usually serves as a basis for other computer vision applications. Recent state-of-the-art networks for semantic segmentation are mostly fully convolutional networks [Long, Shelhamer, and Darrell2015] and adopt an encoder-decoder architecture with multi-stage refinement [Badrinarayanan, Handa, and Cipolla2015]. To achieve the best performance, powerful classification models are often embedded as part of the FCN, which, together with large decoders, pushes up computational complexity.
To further refine the results from neural networks, CRFs are widely used as a post-processing step to improve local predictions [Chen et al.2014a] by reconstructing boundaries more accurately. Since CRFs can be integrated with most methods as a post-processing step and contribute little to our main topic, they will not be discussed in this paper.
The recent success of residual networks has shown that very deep networks can be trained efficiently and perform better than previous architectures. There have also been successful attempts [Wu, Shen, and Hengel2016] to combine FCN and ResNet, achieving considerable improvements in semantic segmentation.
To utilize scene-parsing networks in low-latency or real-time applications, the computational complexity needs to be significantly reduced. Some methods [Paszke et al.2016, Kim et al.2016] reduce the resource demands of FCNs by simplifying or redesigning the network architecture.
We also note that our low-bitwidth method can be integrated with almost all other speedup methods for even further acceleration. For example, low-rank approaches [Zhang et al.2015b] are orthogonal to ours and may be integrated into BFCN.
Method
In this section we first introduce the design of our bit fully convolutional network, and then propose our method for training a BFCN.
Network design
A standard approach to semantic segmentation includes a feature extractor that produces feature maps from the input image, and convolutions with upsampling operations that predict per-pixel labels from those feature maps. We use ResNet as the feature extractor and adopt the multi-resolution reconstruction structure from Laplacian Reconstruction and Refinement [Ghiasi and Fowlkes2016] to perform per-pixel classification over feature maps at different scales (see Figure 1).
However, while the network works well in full precision, we observe a great loss in accuracy when converting it to low bitwidth, indicating that this architecture is not suitable for low-bitwidth networks. To address this issue, we evaluate different variations of BFCN to identify the cause of the performance degeneration. As shown in Table 2, a low-bitwidth network with a single convolution in the reconstruction structure suffers severe degeneration. We also discover that adding more channels to the reconstruction filter improves performance considerably, indicating that a single low-bitwidth convolution is not enough to extract spatial context from the feature maps. In short, we need a more low-bitwidth-friendly reconstruction architecture to eliminate the bottleneck.
model               mean IoU   Ops in reconstruction
------------------  ---------  ---------------------
32-bit model        66.4%      3.7 GOps
baseline            59.6%      3.7 GOps
2x filter channels  63.3%      7.8 GOps
residual block      67.0%      6.9 GOps

Table 2: Performance of different reconstruction-structure variants.
Intuitively, we could add more channels to the reconstruction filter, but this greatly increases computational complexity. Fortunately, ResNet [He et al.2015] allows us to go deeper instead of wider: it has been shown that a deeper residual block can often perform better than a wider convolution. Therefore, we address the issue by replacing the single linear convolution with residual blocks. As shown in Table 2, our residual-block variant even outperforms the original full-precision network.
In our approach, the residual reconstruction structure not only achieves better performance at a complexity similar to a wide convolution, but also accelerates training by reducing the length of the shortest path in reconstruction.
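A back-of-envelope multiply-accumulate count illustrates the trade-off discussed above; the layer sizes below are illustrative, not the paper's exact configuration:

```python
# Doubling the channels of a single KxK convolution roughly quadruples its
# cost, while stacking a second convolution of the original width (as in a
# residual block) only doubles it. Spatial/channel sizes are made up.

def conv_ops(h, w, c_in, c_out, k=3):
    """Multiply-accumulate count of one KxK convolution on an HxW map."""
    return h * w * c_in * c_out * k * k

base = conv_ops(64, 64, 256, 256)          # single 3x3 conv
wider = conv_ops(64, 64, 512, 512)         # 2x channels -> ~4x ops
residual = 2 * conv_ops(64, 64, 256, 256)  # two stacked convs -> 2x ops
```

This is the arithmetic reason why going deeper (residual blocks) is cheaper than going wider for the same gain in representational power.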
Bitwidth allocation
It is important to decide how many bits to allocate to weights and feature maps, because bitwidth has a crucial impact on both the performance and the computational complexity of a network. Since our goal is to speed up semantic segmentation networks without losing much performance, we need to allocate bitwidths carefully.
First, we note that it has been observed [Gupta et al.2015] that 8-bit fixed-point quantization is enough for a network to achieve almost the same performance as its 32-bit floating-point counterpart. Therefore, we focus our attention on bitwidths below eight, which can provide further acceleration.
In order to extend the bit convolution kernel to M-bit weights and K-bit feature maps, we notice that:

x . y = sum_{m=0}^{M-1} sum_{k=0}^{K-1} 2^(m+k) * popcount(and(x_m, y_k)),

where x_m and y_k represent the m-th bit plane of x and the k-th bit plane of y. Therefore, it is straightforward to compute the dot product of M-bit weights and K-bit feature maps using M*K applications of the bit convolution kernel. This shows that the computational complexity, which is our primary optimization target, is proportional to the product of the bitwidths allocated to weights and activations. Bitwidth allocation is especially vital for FPGA implementations, since it directly restricts the network size.
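The bit-plane decomposition can be checked with a minimal pure-Python sketch, assuming unsigned M-bit and K-bit values (the function name is ours, for illustration):

```python
# Dot product of M-bit vectors xs and K-bit vectors ys, computed purely
# from bitwise AND + popcount over bit planes:
#   x . y = sum_{m,k} 2^(m+k) * popcount(and(x_m, y_k))

def dot_bitplanes(xs, ys, M, K):
    """xs: unsigned M-bit ints, ys: unsigned K-bit ints."""
    total = 0
    for m in range(M):
        for k in range(K):
            # extract bit plane m of xs and bit plane k of ys, AND, popcount
            cnt = sum(((x >> m) & 1) & ((y >> k) & 1) for x, y in zip(xs, ys))
            total += (1 << (m + k)) * cnt
    return total
```

Since the inner operation is one bitwise dot product, the total cost is M*K bit convolutions, matching the complexity claim above.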
With a fixed product of bitwidths, we still need to decide how to divide bits between weights and activations. Intuitively, we would allocate bits equally, as this keeps a balance between weights and activations, and an error analysis confirms this intuition.
We first note that the maximum error introduced by quantizing a number in [0, 1] to k bits is 2^(-k-1). As the errors are accumulated mainly by the multiplications in convolutions, the error of a product w * a can be estimated as follows:

E(w * a) ~ |w| * 2^(-k_a - 1) + |a| * 2^(-k_w - 1), which is proportional to 2^(-k_w) + 2^(-k_a).   (1)

When the product k_w * k_a is constant, we have the following inequality:

2^(-k_w) + 2^(-k_a) >= 2^(1 - sqrt(k_w * k_a)).   (2)

The equality holds iff k_w = k_a; thus a balanced bitwidth allocation is needed to minimize errors.
For the first layer, since the input image is 8-bit, we also fix the bitwidth of its weights to 8. The bitwidth of its activations is the same as in the other layers.
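The balanced-allocation claim can be sanity-checked numerically; this toy snippet (ours, not from the paper) compares integer allocations with the same bitwidth product:

```python
# Error proxy from inequality (2): 2^-k_w + 2^-k_a. For a fixed product
# k_w * k_a, the balanced split should give the smallest value.

def err(kw, ka):
    return 2.0 ** -kw + 2.0 ** -ka

# all integer allocations with bitwidth product 4
allocs = [(1, 4), (2, 2), (4, 1)]
best = min(allocs, key=lambda p: err(*p))
```

The balanced allocation (2, 2) attains 2^(1 - sqrt(4)) = 0.5, while the skewed splits (1, 4) and (4, 1) give 0.5625.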
Route to low bitwidth
initialization       mean IoU
-------------------  --------
low-bitwidth ResNet  63.5%
32-bit FCN           65.7%
8-bit BFCN           65.8%

Table 3: Mean IoU of BFCNs trained from different initializations.
As shown in Figure 2, there are two ways to adapt the procedure of training a full-precision fully convolutional network to produce a BFCN, denoted P1 and P2.
The only difference between P1 and P2 is the initialization: P1 uses a full-precision FCN, while P2 uses a low-bitwidth classification network. Here the full-precision FCN serves as an intermediate stage in the training procedure.
We evaluate the two routes and find that the former performs significantly better, as the mean IoU scores in Table 3 indicate. We then add one more intermediate stage to the procedure, an 8-bit BFCN, and achieve a slightly better result. We conjecture that intermediate networks help preserve more information during the conversion to low bitwidth.
Bitwidth decay
We notice that cutting the bitwidth directly from full precision to a very low bitwidth leads to a significant performance drop. To support this observation, we perform a simple experiment: we train 2-bit networks initialized from pretrained networks of different bitwidths. The training curves (Figure 3) show that networks initialized from lower bitwidths converge faster.
This phenomenon can be explained by the quantization errors. Obviously, the higher the original precision, the larger the error introduced by a quantization step, and as a result the model benefits less from the initialization. Introducing intermediate stages helps resolve this, since networks with closer bitwidths tend to be more similar, and hence more noise-tolerant when the bitwidth is cut.
Our experiments show that a BFCN cannot recover well from the quantization loss if initialized directly from a full-precision model. Extending the idea of utilizing intermediate models, we add more intermediate steps when training a BFCN. We propose a method called bitwidth decay, which cuts the bitwidth step by step to avoid the overwhelming quantization error caused by a large drop in numeric precision.
We detail the procedure of the bitwidth decay method as follows:

1. Pretrain a full-precision network.

2. Quantize it to 8 bits, which has been shown to be lossless, and finetune until convergence.

3. Initialize the BFCN with the 8-bit network.

4. Decrease the bitwidth of the network and finetune for enough iterations.

5. Repeat step 4 until the desired bitwidth is reached.
In this way, we reduce the unrecoverable quantization loss, and the adverse impact of quantization is mostly eliminated.
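The steps above can be sketched as a bitwidth schedule; `finetune` below is a hypothetical stand-in for the actual quantize-and-train loop:

```python
# Bitwidth-decay schedule: starting from an 8-bit model, cut `rate` bits
# per stage and finetune at each stage until the target bitwidth is reached.

def bitwidth_schedule(start=8, target=2, rate=1):
    """Return the sequence of bitwidths visited during decay."""
    widths = [start]
    while widths[-1] > target:
        widths.append(max(widths[-1] - rate, target))
    return widths

def train_with_decay(model, finetune, start=8, target=2, rate=1):
    """Quantize and finetune the model at each bitwidth in the schedule."""
    for bits in bitwidth_schedule(start, target, rate):
        model = finetune(model, bits)  # quantize to `bits`, then finetune
    return model
```

With rate 1 this reproduces the decay-rate-1 setting of Table 6 (8, 7, ..., 2); with rate 3 the schedule jumps 8, 5, 2.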
Experiments
In this section, we first describe the datasets we evaluate on and the experiment setup, then demonstrate the results of our method. Note that we conduct most of our experiments in our in-house machine learning system.
method      mean  aero  bike  bird  boat  bottle  bus   car   cat   chair  cow   table  dog   horse  motor  person  plant  sheep  sofa  train  tv
32-bit FCN  69.8  82.4  38.9  82.7  64.3  66.4    86.9  83.3  86.5  31.5   73.0  48.9   78.6  65.4   77.3   81.0    55.7   79.3   41.3  77.8   65.1
4-bit BFCN  68.6  82.3  39.1  79.4  67.4  66.5    85.9  79.6  84.9  29.9   69.3  50.1   75.9  63.2   74.9   81.1    55.6   82.3   37.5  78.0   66.2
2-bit BFCN  67.0  80.8  39.7  75.4  59.0  63.2    85.2  79.5  83.6  29.7   71.3  44.2   75.6  63.8   73.1   79.7    48.5   79.6   43.4  74.4   65.4

Table 4: Per-class IoU scores (%) on the PASCAL VOC 2012 validation set.
Datasets
We benchmarked the performance of our BFCN on PASCAL VOC 2012 and Cityscapes, two popular datasets for semantic segmentation.
The PASCAL VOC 2012 semantic segmentation dataset consists of 1464 labelled images for training and 1449 for validation. There are 20 categories to be predicted, including aeroplane, bus, chair, sofa, etc. All images in the dataset are at most 500x500 pixels. Following the convention in the literature [Long, Shelhamer, and Darrell2015, Wu, Shen, and Hengel2016], we use the augmented dataset from [Hariharan et al.2011], which gives us 10582 training images in total. We also use reflection, resizing and random cropping to augment the training data.
The Cityscapes dataset consists of 2975 finely annotated street photos for training and 500 for validation. There are 19 classes in 7 categories in total. All images have a resolution of 2048x1024. In our experiments, the input of BFCN is random-cropped to 1536x768 due to GPU memory restrictions, while validation is performed at the original size. We train our models with the finely annotated images only.
For performance evaluation, we report the mean class-wise intersection-over-union score (mean IoU), which is the mean of the IoU scores over all classes.
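For reference, this metric can be computed from a confusion matrix as below (a minimal sketch, not the paper's evaluation code):

```python
# Mean class-wise IoU from a confusion matrix where rows index the ground
# truth class and columns the predicted class.

def mean_iou(confusion):
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fn = sum(confusion[c]) - tp                       # missed pixels
        fp = sum(confusion[r][c] for r in range(n)) - tp  # wrong predictions
        denom = tp + fp + fn
        if denom > 0:  # skip classes absent from both truth and prediction
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```

Note the mean is unweighted over classes, so rare classes count as much as frequent ones, which is relevant to the class-wise analysis later in this section.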
Experiment Setup
All experiments are initialized from an ImageNet-pretrained ResNet-50 with bounded activations and weights. We then use stochastic gradient descent with a momentum of 0.9 to finetune the BFCN on the semantic segmentation datasets.
Since the predictions on higher-resolution feature maps in the Laplacian reconstruction and refinement structure depend on the predictions at lower resolutions, we use stage-wise losses to train the network. At first, we only define a loss on the 32x upsampling branch and finetune the network until convergence. Then the losses of the 16x, 8x and 4x upsampling branches are added one by one.
In order to overcome the huge class imbalance in the Cityscapes dataset, we utilize the class weighting scheme introduced by ENet, defined as w_class = 1 / ln(c + p_class), where p_class is the pixel frequency of the class and c is a constant. We bound the class weights to a fixed interval.
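A sketch of this weighting scheme, assuming ENet's constant c = 1.02; the clamping bounds in the usage are hypothetical, since the exact interval is not restated here:

```python
import math

# ENet-style class weights: w_c = 1 / ln(c + p_c), where p_c is the pixel
# frequency of class c. Rare classes (small p_c) get larger weights.

def class_weights(freqs, c=1.02, w_min=None, w_max=None):
    ws = [1.0 / math.log(c + p) for p in freqs]
    if w_min is not None or w_max is not None:  # optional clamping
        lo = w_min if w_min is not None else float("-inf")
        hi = w_max if w_max is not None else float("inf")
        ws = [min(max(w, lo), hi) for w in ws]
    return ws
```

For example, `class_weights([0.9, 0.001], w_max=5.0)` caps the otherwise very large weight of the 0.1%-frequency class at 5.0.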
Results of different bitwidth allocations
bitwidth (W / A)  mean IoU  complexity
----------------  --------  ----------
32 / 32           69.8%     -
8 / 8             69.8%     64
4 / 4             68.6%     16
3 / 3             67.4%     9
2 / 2             65.7%     4
1 / 4             64.4%     4
4 / 1             diverged  4
1 / 2             62.8%     2

Table 5: Results of different bitwidth allocations on PASCAL VOC 2012. Complexity is the product of the weight and activation bitwidths.
First we evaluate the impact of different bitwidth allocations on PASCAL VOC 2012 dataset (see Table 5).
We observe that the performance of the network degenerates as the bitwidth decreases, which corresponds to our intuition. While the 8-8 model performs exactly the same as the full-precision model, decreasing the bitwidth from 4-4 to 2-2 continually incurs degeneration. The degeneration is at first minor compared to the bitwidth savings, but becomes non-negligible around 4-4. We also discover that allocating different bitwidths to weights and activations harms performance compared to an equally-allocated model of the same complexity.
From the results, we conclude that 4-4 and 2-2 are favorable choices in different scenarios. The 4-4 model offers performance comparable to the full-precision model with a considerable 75% resource saving over 8-8 on FPGA. In more resource-limited situations, the 2-2 model still offers good performance at only 6.25% of the hardware complexity of the 8-8 model.
Results of bitwidth decay
decay rate  mean IoU
----------  --------
1           67.0%
2           66.8%
3           66.1%
no decay    65.8%

Table 6: Results of different bitwidth decay rates on PASCAL VOC 2012.
We then show how bitwidth decay affects network performance on PASCAL VOC 2012.
It can be seen from Table 6 that bitwidth decay does help achieve better performance than directly cutting the bitwidth.
Besides, we evaluate the impact of the "decay rate", which is the number of bits removed in each step. For a decay rate of r, the bitwidth after s steps of decay is k_0 - r*s, where k_0 is the initial bitwidth. The results of different decay rates are also presented in Table 6.
We discover that with a decay rate of at most 2 we achieve almost the same performance, but increasing it to 3 leads to a sudden drop. This indicates that a network with 3 fewer bits starts to diverge from its higher-bitwidth counterpart.
method      mean  road  sidewalk  building  wall  fence  pole  light  sign  veg   terrain  sky   person  rider  car   truck  bus   train  moto  bicycle
32-bit FCN  62.1  95.8  73.5      88.2      31.4  38.2   52.6  49.6   65.8  89.8  52.7     90.1  72.8    47.6   89.9  40.8   57.3  37.2   38.2  69.1
2-bit BFCN  60.3  95.3  71.2      87.6      25.9  36.2   51.8  49.0   63.3  89.1  51.7     89.9  71.4    44.5   89.9  40.1   53.8  32.4   33.8  67.5
1-2 BFCN    57.4  94.4  70.1      86.5      22.7  33.9   49.9  44.3   62.2  87.9  44.9     89.3  69.5    40.0   88.1  35.1   51.8  21.0   32.6  65.9

Table 7: Per-class IoU scores (%) on the Cityscapes validation set.
Analysis of classwise results
Most of the performance degeneration occurs in classes that are more difficult to classify. On PASCAL VOC 2012, we observe that on fine-grained class pairs like car versus bus and cat versus dog, BFCN is less powerful than its 32-bit counterpart; however, on classes like sofa and bike, the 2-bit BFCN even outperforms the full-precision network.
This can be seen more clearly on the Cityscapes dataset: classes with low IoU scores in the full-precision network become worse after quantization (like wall and train), while large-scale, frequent classes such as sky and car remain at nearly the same accuracy.
This observation corresponds to our intuition that a low-bitwidth quantized network is usually less powerful and thus harder to train on difficult tasks. It also suggests that class balancing or bootstrapping may improve performance in these cases.
Analysis of runtime performance
We then analyze the runtime performance of BFCN on the Tegra K1 CPU. We implemented a custom runtime on ARM, and all our CPU results are measured directly in this runtime.
We note that one single-precision operation is equivalent to 1024 bitOps on FPGA in terms of resource consumption, and roughly 18 bitOps on CPU according to the inference speed measured in our custom runtime. Thus, a network with M-bit weights and K-bit activations is expected to be about 18/(M*K) times faster than its 32-bit counterpart on CPU, ignoring overheads.
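Under these measurements, the ideal CPU speedup can be written as a one-line model (a rough estimate that ignores memory traffic and other runtime overheads):

```python
# If one 32-bit multiply-add costs about 18 bitOps on this CPU, an
# M-bit-weight / K-bit-activation network needs M*K bitOps per multiply-add,
# so the ideal speedup over full precision is 18 / (M * K).

def ideal_cpu_speedup(m_bits, k_bits, bitops_per_fp32=18):
    return bitops_per_fp32 / (m_bits * k_bits)
```

For the 1-2 BFCN this predicts 9x, the same order of magnitude as the measured 7.8x once overheads are accounted for.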
As shown in Table 8, our 1-2 BFCN runs 7.8x faster than the full-precision network with only 1/32 of the storage size.
method      run time  parameter size
----------  --------  --------------
32-bit FCN  9681 ms   137.7 MB
1-2 BFCN    1237 ms   4.3 MB

Table 8: Runtime and parameter size on the Tegra K1 CPU.
Discussion
We present some example outputs on PASCAL VOC 2012 in Figure 4. The predictions show that BFCNs perform well on easy cases. On difficult cases, however, which mostly involve small objects or rare classes like bottle and sofa, BFCN fails more often and produces worse object boundaries. BFCN also appears to have difficulty reconstructing fine structures of the input image. Nevertheless, the low-bitwidth networks seldom misclassify a whole object, which still allows them to be used in real applications.
Conclusion and Future Work
In this paper, we propose and study methods for training bit fully convolutional networks (BFCNs), which use low-bitwidth weights and activations to accelerate inference and reduce memory footprint. We also propose a novel method for training low-bitwidth networks that decreases the bitwidth step by step to reduce the performance loss caused by quantization. As a result, we are able to train efficient low-bitwidth scene-parsing networks without losing much performance. Low-bitwidth networks are especially friendly to hardware implementations such as FPGAs, since low-bitwidth multipliers require orders of magnitude fewer resources.
As future work, a better baseline model can be used, and CRFs as well as other techniques can be integrated into BFCN for even better performance. We also note that our methods for designing and training low-bitwidth networks can be applied to other related tasks such as object detection and instance segmentation.
References
 [Badrinarayanan, Handa, and Cipolla2015] Badrinarayanan, V.; Handa, A.; and Cipolla, R. 2015. Segnet: A deep convolutional encoder-decoder architecture for robust semantic pixel-wise labelling. arXiv preprint arXiv:1505.07293.
 [Chen et al.2014a] Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; and Yuille, A. L. 2014a. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062.
 [Chen et al.2014b] Chen, T.; Du, Z.; Sun, N.; Wang, J.; Wu, C.; Chen, Y.; and Temam, O. 2014b. Diannao: A small-footprint high-throughput accelerator for ubiquitous machine-learning. In ACM Sigplan Notices, volume 49, 269–284. ACM.
 [Chen et al.2014c] Chen, Y.; Luo, T.; Liu, S.; Zhang, S.; He, L.; Wang, J.; Li, L.; Chen, T.; Xu, Z.; Sun, N.; et al. 2014c. Dadiannao: A machine-learning supercomputer. In Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, 609–622. IEEE.

 [Cordts et al.2016] Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; and Schiele, B. 2016. The cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
 [Courbariaux, Bengio, and David2014] Courbariaux, M.; Bengio, Y.; and David, J.-P. 2014. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024.
 [Everingham et al.2015] Everingham, M.; Eslami, S. M. A.; Van Gool, L.; Williams, C. K. I.; Winn, J.; and Zisserman, A. 2015. The pascal visual object classes challenge: A retrospective. International Journal of Computer Vision 111(1):98–136.
 [Farabet et al.2009] Farabet, C.; Poulet, C.; Han, J. Y.; and LeCun, Y. 2009. Cnp: An fpga-based processor for convolutional networks. In 2009 International Conference on Field Programmable Logic and Applications, 32–37. IEEE.
 [Farabet et al.2011] Farabet, C.; LeCun, Y.; Kavukcuoglu, K.; Culurciello, E.; Martini, B.; Akselrod, P.; and Talay, S. 2011. Large-scale fpga-based convolutional networks. Machine Learning on Very Large Data Sets 1.
 [Ghiasi and Fowlkes2016] Ghiasi, G., and Fowlkes, C. C. 2016. Laplacian reconstruction and refinement for semantic segmentation. CoRR abs/1605.02264.
 [Gong et al.2014] Gong, Y.; Liu, L.; Yang, M.; and Bourdev, L. 2014. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115.
 [Gupta et al.2015] Gupta, S.; Agrawal, A.; Gopalakrishnan, K.; and Narayanan, P. 2015. Deep learning with limited numerical precision. arXiv preprint arXiv:1502.02551.
 [Han et al.2015] Han, S.; Pool, J.; Tran, J.; and Dally, W. 2015. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, 1135–1143.
 [Hariharan et al.2011] Hariharan, B.; Arbeláez, P.; Bourdev, L.; Maji, S.; and Malik, J. 2011. Semantic contours from inverse detectors. In 2011 International Conference on Computer Vision, 991–998. IEEE.
 [He et al.2015] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385.
 [Huang et al.2015] Huang, L.; Yang, Y.; Deng, Y.; and Yu, Y. 2015. Densebox: Unifying landmark localization with end to end object detection. arXiv preprint arXiv:1509.04874.
 [Johnson, Karpathy, and Fei-Fei2015] Johnson, J.; Karpathy, A.; and Fei-Fei, L. 2015. Densecap: Fully convolutional localization networks for dense captioning. arXiv preprint arXiv:1511.07571.
 [Kim and Smaragdis2016] Kim, M., and Smaragdis, P. 2016. Bitwise neural networks. arXiv preprint arXiv:1601.06071.
 [Kim et al.2016] Kim, K.-H.; Cheon, Y.; Hong, S.; Roh, B.; and Park, M. 2016. Pvanet: Deep but lightweight neural networks for real-time object detection. arXiv preprint arXiv:1608.08021.
 [Krizhevsky, Sutskever, and Hinton2012] Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097–1105.
 [Long, Shelhamer, and Darrell2015] Long, J.; Shelhamer, E.; and Darrell, T. 2015. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440.
 [Paszke et al.2016] Paszke, A.; Chaurasia, A.; Kim, S.; and Culurciello, E. 2016. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv preprint arXiv:1606.02147.
 [Pham et al.2012] Pham, P.-H.; Jelaca, D.; Farabet, C.; Martini, B.; LeCun, Y.; and Culurciello, E. 2012. Neuflow: Dataflow vision processing system-on-a-chip. In Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on, 1044–1047. IEEE.
 [Rastegari et al.2016] Rastegari, M.; Ordonez, V.; Redmon, J.; and Farhadi, A. 2016. Xnor-net: Imagenet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279.
 [Ren et al.2015] Ren, S.; He, K.; Girshick, R.; and Sun, J. 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, 91–99.
 [Simonyan and Zisserman2014] Simonyan, K., and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556.

 [Vanhoucke, Senior, and Mao2011] Vanhoucke, V.; Senior, A.; and Mao, M. Z. 2011. Improving the speed of neural networks on cpus. In Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop, volume 1.
 [Wu, Shen, and Hengel2016] Wu, Z.; Shen, C.; and Hengel, A. v. d. 2016. High-performance semantic segmentation using very deep fully convolutional networks. arXiv preprint arXiv:1604.04339.
 [Zhang et al.2015a] Zhang, C.; Li, P.; Sun, G.; Guan, Y.; Xiao, B.; and Cong, J. 2015a. Optimizing fpga-based accelerator design for deep convolutional neural networks. In Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, 161–170. ACM.
 [Zhang et al.2015b] Zhang, X.; Zou, J.; He, K.; and Sun, J. 2015b. Accelerating very deep convolutional networks for classification and detection.
 [Zhou et al.2016] Zhou, S.; Wu, Y.; Ni, Z.; Zhou, X.; Wen, H.; and Zou, Y. 2016. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160.