HiKonv: High Throughput Quantized Convolution With Novel Bit-wise Management and Computation

12/28/2021
by Xinheng Liu, et al.

Quantization for Convolutional Neural Networks (CNNs) has shown significant progress in reducing the cost of computation and storage by using low-bitwidth data inputs. There have been, however, no systematic studies on how existing full-bitwidth processing units, such as CPUs and DSPs, can be better utilized to deliver significantly higher computation throughput for convolution under various quantized bitwidths. In this study, we propose HiKonv, a unified solution that maximizes the compute throughput of a given underlying processing unit on low-bitwidth quantized data inputs through novel bit-wise parallel computation. We establish theoretical performance bounds for highly parallelized low-bitwidth convolution on a full-bitwidth multiplier and demonstrate new breakthroughs for high-performance computing in this critical domain. For example, a single 32-bit processing unit can deliver 128 binarized convolution operations (multiplications and additions) with one CPU instruction, and a single 27x18 DSP core can deliver eight convolution operations with 4-bit inputs in one cycle. We demonstrate the effectiveness of HiKonv on both CPU and FPGA, for individual convolutional layers and for a complete DNN model. For a convolutional layer quantized to 4 bits, HiKonv achieves a 3.17x latency improvement over the baseline C++ implementation on CPU. Compared to the DAC-SDC 2020 champion model for FPGA, HiKonv achieves a 2.37x throughput improvement and a 2.61x DSP efficiency improvement.
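The packing idea behind these numbers can be illustrated with a small, self-contained example. The sketch below is not the paper's exact bit-wise management scheme; the slot width, operand values, and variable names are assumptions for illustration. It packs two 4-bit operands into each 32-bit word so that one full-width multiplication behaves like a polynomial multiplication in base 2^k, and the middle slot of the product comes out as a two-tap convolution sum.

```cpp
// Minimal sketch of low-bitwidth packing on a 32-bit multiplier
// (illustrative only; slot width k and values are assumed, not from the paper).
#include <cstdint>
#include <cassert>
#include <cstdio>

int main() {
    // 4-bit unsigned input samples and kernel taps (each < 16).
    uint32_t x0 = 11, x1 = 7;
    uint32_t w0 = 5,  w1 = 13;

    // Guard width: a 4-bit x 4-bit product needs 8 bits, the sum of two such
    // products needs 9, so k = 10 keeps neighboring slots from overlapping.
    const uint32_t k = 10;
    const uint32_t mask = (1u << k) - 1;

    // Pack two operands per 32-bit word, treating them as polynomial
    // coefficients in base 2^k.
    uint32_t X = x0 | (x1 << k);
    uint32_t W = w0 | (w1 << k);

    // One full-width multiplication computes all cross products at once:
    // X*W = x0*w0 + (x0*w1 + x1*w0)*2^k + x1*w1*2^(2k).
    uint32_t P = X * W;

    uint32_t lo  = P & mask;              // x0*w0
    uint32_t mid = (P >> k) & mask;       // x0*w1 + x1*w0  (a convolution output)
    uint32_t hi  = (P >> (2 * k)) & mask; // x1*w1

    assert(lo  == x0 * w0);
    assert(mid == x0 * w1 + x1 * w0);
    assert(hi  == x1 * w1);
    printf("slots: %u %u %u\n", lo, mid, hi);
    return 0;
}
```

The guard bits between slots determine how many low-bitwidth multiply-adds a given multiplier width can absorb per operation, which is the kind of trade-off the paper's theoretical bounds characterize.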

Related research

07/22/2022
HiKonv: Maximizing the Throughput of Quantized Convolution With Novel Bit-wise Management and Computation
Quantization for CNN has shown significant progress with the intention o...

03/19/2020
LANCE: efficient low-precision quantized Winograd convolution for neural networks based on graphics processing units
Accelerating deep convolutional neural networks has become an active top...

04/06/2020
LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications
Deployment of deep neural networks for applications that require very hi...

07/13/2018
CascadeCNN: Pushing the Performance Limits of Quantisation in Convolutional Neural Networks
This work presents CascadeCNN, an automated toolflow that pushes the qua...

05/22/2018
CascadeCNN: Pushing the performance limits of quantisation
This work presents CascadeCNN, an automated toolflow that pushes the qua...

12/18/2017
Automated flow for compressing convolution neural networks for efficient edge-computation with FPGA
Deep convolutional neural networks (CNN) based solutions are the current...

11/18/2021
QGTC: Accelerating Quantized Graph Neural Networks via GPU Tensor Core
Over the most recent years, quantized graph neural network (QGNN) attrac...
