To enable deep learning capability on IoT devices, two major components must be designed: the software, e.g., DNN models whose parameters are learned for specific applications, and the hardware, e.g., DNN accelerators running on GPUs, FPGAs, or ASICs. Both contribute to the overall QoR and QoS without clear distinctions, so there is an urgent need for DNN and accelerator co-design.
1.1 Drawbacks of Independent Design Approaches
Typically, DNNs and their accelerators are designed and optimized separately for IoT applications in an iterative manner. DNNs are first designed with a focus on QoR. Such DNNs can be excessively complicated for the targeted IoT devices; they must be compressed via quantization, network pruning, or sparsification (Wang et al., 2018; Han et al., 2017) before being implemented on hardware, and then retrained to maintain inference accuracy. Since no hardware constraints are captured during DNN design, this methodology can only hope that hardware accelerators will deliver good QoS through later hardware-side optimizations. On the other hand, DNN accelerator design usually adopts a consistent overall architecture (such as the recurrent (Aydonat et al., 2017; Zeng et al., 2018; Jouppi et al., 2017) or pipelined structure (Li et al., 2016; Zhang et al., 2018)) with various scale-down factors to meet different hardware constraints. Under strict hardware constraints, scaling down the accelerator is not always feasible, as the shrinking resources can significantly slow down DNN inference and result in poor QoS. In such cases, the design effort must turn to the algorithm side and demand more compact DNN models.
2 Empirical Observations
2.1 Challenging HW/SW Configurations
One of the most fundamental barriers to joint DNN and accelerator design is the different sensitivities of DNN/accelerator configurations (e.g., DNN model size, hardware utilization features). It is hard to balance these configurations with a separated DNN/accelerator design approach, since a negligible change in the DNN model may cause huge differences in the hardware accelerator and vice versa, making the trade-off between QoR and QoS difficult.
Observation 1: similar compression rate but different accuracy. When designing DNNs for IoT applications, model compression is inevitable. Although DNNs with similar overall compression rates may deliver similar QoS, compressing different DNN components can cause great differences in QoR. As shown in Fig. 1 (a), the accuracy trends vary significantly between quantizing parameters and quantizing intermediate feature maps (FMs). In this figure, the coordinates of each bubble center represent accuracy and model compression rate, while the bubble area shows data size in megabytes (MB); the FM bubbles are scaled up for better visibility. By compressing the model from full precision (float32) to 8-bit and 4-bit fixed-point, ternary, and binary representations, we reduce the parameter size by 22X (237.9 MB → 10.8 MB) and the FM size by 16X (15.7 MB → 0.98 MB), respectively. Results show that the inference accuracy is more sensitive to the precision of the FMs (9.8% accuracy drop with 16X compression) than to that of the parameters (4.8% accuracy drop with 22X compression). Challenges also come from the difficulty of DNN training: as shown in Fig. 1 (b), the accuracy growth of the compressed model is quite unstable compared to that of the original full-precision model. It requires more effort to design the training process (e.g., fine-tuning the training setup or iteratively modifying the DNN compression rate) and more powerful machines (e.g., computer clusters for faster training (Li et al., 2018)).
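To make the compression arithmetic above concrete, the following is a minimal sketch of uniform fixed-point quantization applied to a weight tensor; the function names and the choice of fractional bits are our own illustration, not the paper's scheme.

```python
import numpy as np

def quantize_fixed(x, bits, frac_bits):
    """Uniform fixed-point quantization: round to the nearest value
    representable with `bits` total bits, `frac_bits` of them fractional,
    clipping to the signed range."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (bits - 1))
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

def compression_rate(bits):
    """Storage compression relative to float32."""
    return 32.0 / bits

weights = np.random.randn(1000).astype(np.float32)
w4 = quantize_fixed(weights, bits=4, frac_bits=2)
print("4-bit compression vs float32: %.0fx" % compression_rate(4))  # 8x
```

The same function applies to feature maps; the observation above is that the accuracy cost of a given `bits` setting differs sharply between the two, even at comparable compression rates.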
Observation 2: similar accuracy but different hardware resource utilization. DNN models with similar QoR may also deliver greatly different QoS because of different hardware resource usage. Taking the implementation of a DNN accelerator on an FPGA as an example, a single-bit difference in data representation can have a considerable impact on hardware resource utilization. Fig. 2 (a) shows BRAM (FPGA on-chip memory) usage under different image resize factors with 12- to 16-bit data precision. By reducing the resize factor from 1.00 to 0.78, we maintain nearly the same DNN accuracy (<1.0% drop), yet half the memory is saved once the resize factor drops below 0.9. Similarly, Fig. 2 (b) indicates that different quantization combinations of DNN feature maps and weights can result in widely different DSP utilization. Taking the 6-bit feature map as an example, the DSP usage drops from 128 to 64 when the weights are changed from 15-bit to 14-bit. The reason is the limited bit-width each DSP supports for a two-input multiplication: if the bit-widths of the two inputs exceed a certain value, multiple DSPs must be concatenated to handle one multiplication, which can easily double the resource utilization. These observations imply that hardware/software configurations pose great challenges to delivering the desired QoR and QoS on IoT devices.
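The cliff effect can be sketched with a coarse operand-splitting model. The 25×18 port widths below match the DSP48E1 slice on Zynq-7000 devices such as the Pynq-Z1, but that choice is our assumption, and the model ignores vendor-specific operand packing (which is what produces the exact 64→128 jump in Fig. 2 (b)); it only illustrates why crossing a port-width boundary multiplies DSP usage.

```python
import math

# Port widths of one DSP slice (signed); 25x18 matches the DSP48E1
# on Zynq-7000 parts -- an assumption for illustration only.
DSP_A, DSP_B = 25, 18

def dsps_per_multiply(w_bits, f_bits):
    """Rough DSP count for one w_bits x f_bits multiplication:
    split each operand into port-sized chunks and spend one DSP
    per chunk pair, concatenating the partial products."""
    a, b = max(w_bits, f_bits), min(w_bits, f_bits)
    return math.ceil(a / DSP_A) * math.ceil(b / DSP_B)

print(dsps_per_multiply(25, 16))  # fits one DSP -> 1
print(dsps_per_multiply(26, 16))  # one extra bit doubles the count -> 2
```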
2.2 Confusing QoR Upper-Bounds for Given Tasks
When deploying DNNs on IoT devices, it is common to first find a DNN with the desired QoR upper-bound for the targeted application, and then to prune the DNN to make up for the lost QoS on hardware. This solution assumes that complicated DNNs with more parameters always deliver higher QoR than simple DNNs with fewer parameters. However, this is not always true. By examining a UAV-based object detection task (DAC, 2018), we observe an abnormal trend regarding model size and QoR upper-bound (Table 1), where DNNs with more parameters fail to deliver higher accuracy after adequate training. This implies that the current separated DNN/accelerator design may only reach suboptimal solutions, and requires more time and effort of iterative refinement before delivering the desired QoR and QoS.
Table 1. Parameter size and accuracy (IoU) of different backbones on the UAV-based object detection task.

| Backbone | Para. Size (MB) | IoU |
| --- | --- | --- |
| ResNet-18 (He et al., 2016) | 85 | 61% |
| ResNet-32 (He et al., 2016) | 162 | 26% |
| ResNet-50 (He et al., 2016) | 179 | 32% |
| VGG-16 (Simonyan et al., 2014) | 56 | 25% |
3 The Proposed Bi-Directional Co-Design
Motivated by the discussed observations, we propose a bi-directional co-design methodology with a bottom-up hardware-oriented DNN design, and a top-down accelerator design considering DNN-specific characteristics. Both DNNs and accelerators are designed simultaneously to pursue the best trade-off between QoS and QoR. The overall flow of the proposed co-design is shown in Fig. 3. The inputs of this flow include the targeted QoS, QoR, and the hardware resource constraints; the outputs include the generated DNN model and its corresponding accelerator design. We break down the whole flow into three steps:
Step 1: Bundle construction and QoS evaluation.
We randomly select DNN components from the layer pool and construct bundles (the basic building blocks of generated DNNs) with different layer combinations. Each bundle is evaluated by analytical models that capture its hardware characteristics (e.g., latency, computation and memory demands, resource utilization), which allows early-stage QoS estimation during DNN exploration.
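An analytical model of this kind can be as simple as counting MACs and dividing by the accelerator's throughput. Below is our simplified version: the MAC formulas for depth-wise and point-wise convolutions are standard, while the hardware parameters (parallel MACs per cycle, clock) are placeholders, not the paper's values.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    kind: str    # "dw-conv" (depth-wise), "pw-conv" (point-wise), "pool"
    c_in: int    # input channels
    c_out: int   # output channels
    k: int       # kernel size

def bundle_cost(layers, h, w, macs_per_cycle, clock_mhz):
    """Estimate the MAC count and latency of one bundle on an
    h x w feature map."""
    total_macs = 0
    for l in layers:
        if l.kind == "dw-conv":          # one k x k filter per channel
            total_macs += h * w * l.c_in * l.k * l.k
        elif l.kind == "pw-conv":        # 1x1 convolution across channels
            total_macs += h * w * l.c_in * l.c_out
        # pooling layers contribute no MACs in this model
    cycles = total_macs / macs_per_cycle
    latency_ms = cycles / (clock_mhz * 1e3)  # MHz -> cycles per ms
    return total_macs, latency_ms

bundle = [Layer("dw-conv", 48, 48, 3), Layer("pw-conv", 48, 96, 1)]
macs, ms = bundle_cost(bundle, h=40, w=90, macs_per_cycle=64, clock_mhz=100)
```

Because such models are closed-form, every candidate bundle can be scored in microseconds, which is what makes early-stage QoS pruning feasible before any training happens.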
Step 2: QoR- and QoS-based bundle selection. To select the most promising bundles, we first evaluate the QoR potential of each bundle by replicating it several times to construct a prototype DNN. All prototype DNNs are fast-trained (20 epochs) directly on the targeted dataset to obtain accuracy results. Based on the QoS estimation in Step 1, we group prototype DNNs whose QoS is similar to the input targets and select the top-ranked bundle candidates from each group.
Step 3: Hardware-aware DNN exploration. By stacking the selected bundle, we explore DNNs in a bottom-up fashion under the given QoS and QoR constraints using stochastic coordinate descent (SCD). DNNs output by SCD are precisely evaluated with respect to their QoS and fed back to SCD for model updates. Generated DNNs that meet the QoS targets are then trained and fine-tuned to improve their QoR.
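The SCD loop in Step 3 can be sketched as follows. The configuration vector, constraint, and score below are toy stand-ins of our own; the real flow would plug in the analytical QoS model and a trained-accuracy proxy.

```python
import random

def scd_search(init_cfg, qos_ok, score, steps=200, seed=0):
    """Stochastic coordinate descent over a DNN configuration vector
    (e.g. channel counts, bundle replications): perturb one random
    coordinate per step and keep the move only if it improves the
    QoR proxy while still meeting the QoS constraint."""
    rng = random.Random(seed)
    cfg, best = list(init_cfg), score(init_cfg)
    for _ in range(steps):
        i = rng.randrange(len(cfg))
        cand = list(cfg)
        cand[i] = max(1, cand[i] + rng.choice([-1, 1]))
        if qos_ok(cand) and score(cand) > best:
            cfg, best = cand, score(cand)
    return cfg

# Toy stand-ins: maximize total channels under a latency-like budget.
qos_ok = lambda c: sum(x * x for x in c) <= 200
score = lambda c: sum(c)
cfg = scd_search([4, 4, 4], qos_ok, score)
```

Updating one coordinate at a time keeps each QoS re-evaluation cheap, which matters when the search space of layer and quantization choices is large.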
We propose a DNN accelerator with a tile-based pipelined architecture for efficient implementation of DNN applications under a maximum resource-sharing strategy. It includes a folded structure that computes DNN bundles sequentially by reusing the same hardware computing components, saving resources when targeting compact IoT devices. To ensure better QoS, it also uses an unfolded structure that computes the operations inside a bundle (partitioned into tiles) in a pipelined manner. By combining the folded and unfolded structures, the proposed architecture acquires advantages from both recurrent and pipelined designs.
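The interaction of the two structures can be illustrated by the order in which work is issued: bundles are serialized through one shared engine (folded), while the tiles of each bundle overlap across pipeline stages (unfolded). This is a schematic sketch of the schedule only, with names of our choosing, not a model of the RTL.

```python
def issue_order(n_bundles, n_tiles, n_stages):
    """Return (bundle, stage, tile) tuples in issue order: bundles run
    sequentially on one shared compute engine (folded), and within a
    bundle the tiles stream through the pipeline stages (unfolded)."""
    schedule = []
    for b in range(n_bundles):                      # folded: engine reuse
        for step in range(n_tiles + n_stages - 1):  # pipeline time steps
            for s in range(n_stages):
                t = step - s                        # tile in stage s now
                if 0 <= t < n_tiles:
                    schedule.append((b, s, t))
    return schedule

sched = issue_order(n_bundles=2, n_tiles=3, n_stages=2)
```

In this schedule all of bundle 0's tiles drain before bundle 1 starts, which is exactly the resource-saving/throughput trade the folded/unfolded combination makes.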
Table 2. Configurations of the three generated DNNs (W: weight bit-width, F: feature-map bit-width): DNN-A (W16, F8), DNN-B (W16, F16), and DNN-C (W11, F8). All three take a 3×160×360 color image as input and share a back-end for bounding box regression.
| Design | IoU | FPS | Efficiency |
| --- | --- | --- | --- |
| The proposed DNN-A | 59.3% | 29.7 | 12.38 image/watt |
| The proposed DNN-B | 61.2% | 22.7 | 9.46 image/watt |
| The proposed DNN-C | 68.6% | 17.4 | 6.96 image/watt |
| Modified SSD (FPGA) | 62.4% | 12.0 | 2.86 image/watt |
| Modified Yolo (GPU) | 69.8% | 24.6 | 1.95 image/watt |
4 Results and Conclusions
We demonstrate the proposed bi-directional co-design on a real-life object detection task from the DAC’18 System Design Contest and generate three DNNs (A, B, and C in Table 2) with corresponding accelerators on a Pynq-Z1 FPGA for different QoS-QoR combinations. The proposed co-design flow first identifies the bundle with DW-Conv3, PW-Conv1, and max-pooling layers as the most promising building template for the target hardware device and application. Based on this bundle, the co-design explores three DNN configurations with different quantization schemes to satisfy different QoR demands. As shown in Table 3, we deliver the best FPS (29.7) and efficiency (12.38 image/watt) on the same FPGA as the contest's FPGA champion design. Among our designs, DNN-C outperforms the FPGA winning design in all aspects, with 6.2% higher IoU, 1.45X higher FPS, and 2.4X higher efficiency. Compared to the GPU winning design, DNN-C delivers comparable accuracy but 3.6X higher efficiency.
This work was partly supported by the IBM-Illinois Center for Cognitive Computing System Research (CSR) – a research collaboration as part of IBM AI Horizons Network.
- DAC (2018) DAC System Design Contest. https://github.com/xyzxinyizhang/2018-DAC-System-Design-Contest, 2018.
- Aydonat et al. (2017) Aydonat, U., O’Connell, S., Capalija, D., Ling, A. C., and Chiu, G. R. An OpenCL deep learning accelerator on Arria 10. In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 55–64. ACM, 2017.
- Han et al. (2017) Han, S., Kang, J., Mao, H., Hu, Y., Li, X., Li, Y., Xie, D., Luo, H., Yao, S., Wang, Y., et al. ESE: Efficient speech recognition engine with sparse LSTM on FPGA. In Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 75–84. ACM, 2017.
- He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
- Jouppi et al. (2017) Jouppi, N. P., Young, C., Patil, N., Patterson, D., Agrawal, G., Bajwa, R., Bates, S., Bhatia, S., Boden, N., Borchers, A., et al. In-datacenter performance analysis of a tensor processing unit. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), pp. 1–12. IEEE, 2017.
- Li et al. (2016) Li, H., Fan, X., Jiao, L., Cao, W., Zhou, X., and Wang, L. A high performance FPGA-based accelerator for large-scale convolutional neural networks. In 2016 26th International Conference on Field Programmable Logic and Applications (FPL), pp. 1–9. IEEE, 2016.
- Li et al. (2018) Li, Y., Yu, M., Li, S., Avestimehr, S., Kim, N. S., and Schwing, A. Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NIPS’18), Montreal, Canada, December 2018.
- Simonyan et al. (2014) Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- Wang et al. (2018) Wang, J., Lou, Q., Zhang, X., Zhu, C., Lin, Y., and Chen, D. Design flow of accelerating hybrid extremely low bit-width neural network in embedded FPGA. In 2018 28th International Conference on Field Programmable Logic and Applications (FPL), pp. 163–1636. IEEE, 2018.
- Zeng et al. (2018) Zeng, H., Chen, R., Zhang, C., and Prasanna, V. A framework for generating high throughput CNN implementations on FPGAs. In Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pp. 117–126. ACM, 2018.
- Zhang et al. (2018) Zhang, X., Wang, J., Zhu, C., Lin, Y., Xiong, J., Hwu, W.-m., and Chen, D. DNNBuilder: an automated tool for building high-performance DNN hardware accelerators for FPGAs. In Proceedings of the International Conference on Computer-Aided Design, pp. 56. ACM, 2018.