Table-Based Neural Units: Fully Quantizing Networks for Multiply-Free Inference

06/11/2019
by Michele Covell, et al.

In this work, we propose to quantize all parts of standard classification networks and replace the activation-weight multiply step with a simple table-based lookup. This approach results in networks that are free of floating-point operations and free of multiplications, suitable for direct FPGA and ASIC implementations. It also provides us with two simple measures of per-layer and network-wide compactness, as well as insight into the distribution characteristics of activation-output and weight values. We run controlled studies across different quantization schemes, both fixed and adaptive and, within the set of adaptive approaches, both parametric and model-free. We implement our approach to quantization with minimal, localized changes to the training process, allowing us to benefit from advances in training continuous-valued network architectures. We apply our approach successfully to AlexNet, ResNet, and MobileNet. We show results that are within 1.6% of the reported, non-quantized performance on MobileNet using only 40 entries in our table. This performance gap narrows to zero when we allow tables with 320 entries. Our results give the best accuracies among multiply-free networks.
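To make the table-lookup idea concrete, here is a minimal sketch, not the paper's implementation: activations and weights are snapped to small codebooks, every codebook-pair product is precomputed once offline, and each inference-time multiply becomes a table lookup plus an addition. The uniform codebooks, value ranges, and helper names (quantize, table_dot) are illustrative assumptions; the paper studies both fixed and adaptive codebooks, including parametric and model-free adaptive ones.

```python
import numpy as np

N_ACT, N_WGT = 40, 40  # product table has N_ACT x N_WGT entries

# Hypothetical codebooks: uniform grids over assumed value ranges.
act_codebook = np.linspace(0.0, 6.0, N_ACT)    # e.g. post-ReLU activations
wgt_codebook = np.linspace(-1.0, 1.0, N_WGT)   # e.g. normalized weights

# Precompute all pairwise activation-code x weight-code products once.
product_table = np.outer(act_codebook, wgt_codebook)  # shape (N_ACT, N_WGT)

def quantize(x, codebook):
    """Map each value to the index of its nearest codebook entry."""
    return np.abs(x[..., None] - codebook).argmin(axis=-1)

def table_dot(acts, weights):
    """Dot product using only table lookups and additions; all multiplies
    happened when product_table was built."""
    a_idx = quantize(acts, act_codebook)
    w_idx = quantize(weights, wgt_codebook)
    return product_table[a_idx, w_idx].sum()

# Quick check against an ordinary floating-point dot product.
rng = np.random.default_rng(0)
a = rng.uniform(0.0, 6.0, size=256)
w = rng.uniform(-1.0, 1.0, size=256)
print(table_dot(a, w), a @ w)  # close, up to quantization error
```

The 40-entry codebooks mirror the table size quoted in the abstract; growing the codebooks shrinks quantization error, consistent with the abstract's observation that the accuracy gap closes at 320 entries.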


