Quantized Neural Networks via -1, +1 Encoding Decomposition and Acceleration

06/18/2021
by Qigong Sun, et al.

The training of deep neural networks (DNNs) requires intensive resources for both computation and data storage. As a result, DNNs cannot be deployed efficiently on mobile phones and embedded devices, which severely limits their use in industrial applications. To address this issue, we propose a novel encoding scheme that uses -1, +1 to decompose quantized neural networks (QNNs) into multi-branch binary networks, which can be implemented efficiently with bitwise operations (i.e., xnor and bitcount) to achieve model compression, computational acceleration, and resource savings. With our method, users can choose the encoding precision arbitrarily according to their requirements and hardware resources. The proposed mechanism is well suited to FPGAs and ASICs in terms of data storage and computation, offering a feasible approach for smart chips. We validate the effectiveness of our method on large-scale image classification (e.g., ImageNet), object detection, and semantic segmentation tasks. In particular, our method with low-bit encoding still achieves almost the same performance as its high-bit counterparts.
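
As a concrete illustration of the decomposition described in the abstract, the following is a minimal NumPy sketch, not the authors' implementation; the function names (encode_pm1, binary_dot_xnor, quantized_dot) and the use of a plane-wise sum in place of a hardware bitcount are illustrative assumptions. It shows how an M-bit quantized value with odd integer levels can be split into M binary {-1, +1} planes, and how the quantized dot product is then reconstructed as a weighted sum of plane-wise binary dot products, each of which reduces to an xnor followed by a bitcount.

import numpy as np

def encode_pm1(x, bits):
    """Split odd integers x in [-(2**bits - 1), 2**bits - 1] into `bits`
    bit-planes b_m in {0, 1} such that x = sum_m 2**m * (2*b_m - 1),
    i.e., a weighted sum of {-1, +1} branches."""
    offset = (x + (1 << bits) - 1) // 2              # map x to an unsigned index
    return [((offset >> m) & 1).astype(np.uint8) for m in range(bits)]

def binary_dot_xnor(bp_a, bp_b):
    """Dot product of two {-1, +1} vectors stored as {0, 1} bit-planes,
    computed as 2 * bitcount(xnor) - n."""
    n = bp_a.size
    xnor = ~(bp_a ^ bp_b) & 1                        # elementwise xnor on the planes
    return 2 * int(xnor.sum()) - n                   # sum() stands in for bitcount

def quantized_dot(x, y, bits_x, bits_y):
    """Multi-branch reconstruction: sum_{i,j} 2**(i+j) * (plane_i(x) . plane_j(y))."""
    planes_x = encode_pm1(x, bits_x)
    planes_y = encode_pm1(y, bits_y)
    total = 0
    for i, px in enumerate(planes_x):
        for j, py in enumerate(planes_y):
            total += (1 << (i + j)) * binary_dot_xnor(px, py)
    return total

# Sanity check against the ordinary integer dot product.
rng = np.random.default_rng(0)
bits = 2                                             # 2-bit levels {-3, -1, 1, 3}
x = rng.choice([-3, -1, 1, 3], size=64)
y = rng.choice([-3, -1, 1, 3], size=64)
assert quantized_dot(x, y, bits, bits) == int(np.dot(x, y))

On an actual FPGA, ASIC, or bit-packed CPU implementation, each plane would be packed into machine words so that the xnor and bitcount steps operate on 32 or 64 elements per instruction; the sketch keeps one element per byte purely for readability.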

