Automated flow for compressing convolution neural networks for efficient edge-computation with FPGA

12/18/2017
by Farhan Shafiq, et al.

Deep convolutional neural network (CNN) based solutions are the current state-of-the-art for computer vision tasks. Due to the large size of these models, they are typically run on clusters of CPUs or GPUs. However, power requirements and cost budgets can be a major hindrance to the adoption of CNNs for IoT applications. Recent research highlights that CNNs contain significant redundancy in their structure and can be quantized to lower bit-width parameters and activations while maintaining acceptable accuracy. Low bit-width, and especially single bit-width (binary), CNNs are particularly suitable for mobile applications based on FPGA implementation, since binarized CNNs reduce convolutions to bitwise logic operations. Moreover, the transition to lower bit-widths opens new avenues for performance optimization and model improvement. In this paper, we present an automatic flow from trained TensorFlow models to an FPGA system-on-chip implementation of binarized CNNs. This flow involves quantization of model parameters and activations and generation of the network and model in embedded C, followed by automatic generation of the FPGA accelerator for binary convolutions. The automated flow is demonstrated through an implementation of binarized YOLOv2 on the low-cost, low-power Cyclone-V FPGA device. Experiments on object detection using binarized YOLOv2 show significant benefits in model size and inference speed on the FPGA as compared to CPU and mobile CPU platforms. Furthermore, the entire automated flow, from trained model to FPGA synthesis, can be completed within one hour.
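The "bitwise logic operations" that make binarized CNNs attractive for FPGAs can be illustrated with a small sketch: when weights and activations are constrained to {-1, +1} and packed as bit vectors (bit 1 for +1, bit 0 for -1), a dot product reduces to an XNOR followed by a population count. This is a generic illustration of the standard XNOR-popcount trick, not code from the paper's flow; the function name and packing convention are assumptions.

```python
def binary_dot(w_bits: int, a_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as n-bit ints.

    Bit convention (an assumption for this sketch): bit value 1 encodes +1,
    bit value 0 encodes -1.
    """
    mask = (1 << n) - 1
    xnor = ~(w_bits ^ a_bits) & mask      # 1 wherever the two signs agree
    matches = bin(xnor).count("1")        # popcount: number of agreements
    # Each agreement contributes +1, each disagreement -1:
    return 2 * matches - n

# Example: w = [+1, -1, +1, +1] -> 0b1011, a = [+1, +1, -1, +1] -> 0b1101
# Elementwise products are +1, -1, -1, +1, so the dot product is 0.
print(binary_dot(0b1011, 0b1101, 4))  # -> 0
```

On an FPGA, the XNOR and popcount map directly to LUTs and adder trees, which is why a single clock cycle can evaluate many binary multiply-accumulates in parallel.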

Related research:

- LPYOLO: Low Precision YOLO for Face Detection on FPGA (07/21/2022)
- B-DCGAN: Evaluation of Binarized DCGAN for FPGA (03/29/2018)
- HiKonv: Maximizing the Throughput of Quantized Convolution With Novel Bit-wise Management and Computation (07/22/2022)
- HiKonv: High Throughput Quantized Convolution With Novel Bit-wise Management and Computation (12/28/2021)
- NICE: Noise Injection and Clamping Estimation for Neural Network Quantization (09/29/2018)
- FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations (12/22/2020)
- A System-Level Solution for Low-Power Object Detection (09/24/2019)
