Learnable Companding Quantization for Accurate Low-bit Neural Networks

03/12/2021
by Kohei Yamamoto, et al.

Quantizing deep neural networks is an effective method for reducing memory consumption and improving inference speed, and is thus useful for deployment on resource-constrained devices. However, it is still hard for extremely low-bit models to achieve accuracy comparable with that of full-precision models. To address this issue, we propose learnable companding quantization (LCQ) as a novel non-uniform quantization method for 2-, 3-, and 4-bit models. LCQ jointly optimizes model weights and learnable companding functions that can flexibly and non-uniformly control the quantization levels of weights and activations. We also present a new weight normalization technique that allows more stable training for quantization. Experimental results show that LCQ outperforms conventional state-of-the-art methods and narrows the gap between quantized and full-precision models for image classification and object detection tasks. Notably, the 2-bit ResNet-50 model on ImageNet achieves a top-1 accuracy of 75.1%, allowing LCQ to further exploit the potential of non-uniform quantization.
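To make the companding idea concrete, here is a minimal PyTorch sketch, assuming a mu-law-style compressor with a single learnable parameter mu and a straight-through estimator for the rounding step. The function name and the mu-law form are illustrative assumptions, not the paper's method: LCQ's learnable companding functions are more flexible and are trained jointly with the model weights, but the compress, uniformly quantize, expand pipeline shown here is the core idea.

    import torch

    def companding_quantize(x: torch.Tensor, mu: torch.Tensor, bits: int = 2) -> torch.Tensor:
        """Non-uniform quantization of x in [-1, 1] via mu-law-style companding (illustrative)."""
        levels = 2 ** bits - 1
        # Compress: squeeze large magnitudes together, so uniform grid points in the
        # compressed domain correspond to a non-uniform grid in the original domain.
        c = torch.sign(x) * torch.log1p(mu * x.abs()) / torch.log1p(mu)
        # Uniformly quantize in the compressed domain; the straight-through estimator
        # lets gradients reach both x and mu during training.
        q = torch.round(c * levels) / levels
        q = c + (q - c).detach()
        # Expand: apply the inverse of the compressor to return to the original domain.
        return torch.sign(q) * ((1.0 + mu) ** q.abs() - 1.0) / mu

    w = torch.empty(8).uniform_(-1, 1).requires_grad_()
    mu = torch.tensor(8.0, requires_grad=True)
    w_q = companding_quantize(w, mu, bits=2)
    w_q.sum().backward()  # gradients flow to both the weights and mu

Because mu receives gradients, the shape of the quantization grid itself is optimized during training, which is what distinguishes learnable companding from a fixed non-uniform quantizer.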



Related research

09/28/2019 · Additive Powers-of-Two Quantization: A Non-uniform Discretization for Neural Networks
We propose Additive Powers-of-Two (APoT) quantization, an efficient non...

04/01/2021 · Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer
Quantizing weights and activations of deep neural networks is essential ...

10/07/2019 · Bit Efficient Quantization for Deep Neural Networks
Quantization for deep neural networks has afforded models for edge devi...

05/04/2021 · One Model for All Quantization: A Quantized Network Supporting Hot-Swap Bit-Width Adjustment
As an effective technique to achieve the implementation of deep neural n...

03/22/2021 · n-hot: Efficient bit-level sparsity for powers-of-two neural network quantization
Powers-of-two (PoT) quantization reduces the number of bit operations of...

12/26/2020 · Hybrid and Non-Uniform quantization methods using retro synthesis data for efficient inference
Existing quantization aware training methods attempt to compensate for t...

06/13/2023 · SqueezeLLM: Dense-and-Sparse Quantization
Generative Large Language Models (LLMs) have demonstrated remarkable res...
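Several of the entries above (APoT, n-hot) build on powers-of-two (PoT) quantization, where each quantization level is a power of two so that multiplications reduce to bit shifts. A minimal illustrative sketch follows; the function name and exponent range are assumptions for illustration, and the exact level sets differ across the papers.

    import torch

    def pot_quantize(x: torch.Tensor, bits: int = 3) -> torch.Tensor:
        """Snap each magnitude to a nearby power of two so multiplies become shifts."""
        exp = torch.round(torch.log2(x.abs().clamp_min(1e-12)))  # nearest exponent in the log2 domain
        exp = torch.clamp(exp, min=-(2 ** bits - 2), max=0)      # keep a small representable exponent range
        return torch.sign(x) * torch.exp2(exp)

    x = torch.tensor([0.9, -0.3, 0.07, -0.011])
    print(pot_quantize(x))  # tensor([ 1.0000, -0.2500,  0.0625, -0.0156])

Like the companding grid above, PoT levels are non-uniform (dense near zero, sparse near one), which is why both families are natural fits for bell-shaped weight distributions.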
