Neural Network Activation Quantization with Bitwise Information Bottlenecks

06/09/2020
by Xichuan Zhou, et al.

Recent research on the information bottleneck sheds new light on continuing attempts to open the black box of neural signal encoding. Inspired by lossy signal compression in wireless communication, this paper presents a Bitwise Information Bottleneck approach for quantizing and encoding neural network activations. Grounded in rate-distortion theory, the Bitwise Information Bottleneck identifies the most significant bits in an activation representation by assigning and approximating a sparse coefficient for each bit. Under the constraint of a limited average code rate, the information bottleneck minimizes the rate-distortion of activation quantization in a flexible layer-by-layer manner. Experiments on ImageNet and other datasets show that, by minimizing the quantization rate-distortion of each layer, a neural network with information bottlenecks achieves state-of-the-art accuracy with low-precision activations. Meanwhile, by reducing the code rate, the proposed method improves memory and computational efficiency by more than six times compared with deep neural networks using standard single-precision representation. Code will be available on GitHub upon acceptance: <https://github.com/BitBottleneck/PublicCode>.
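To make the bitwise decomposition concrete, below is a minimal NumPy sketch of the core idea: uniformly quantize non-negative activations, split them into bit planes, and keep only a few planes (the average code-rate budget) while refitting one coefficient per kept plane to reduce distortion. The function names (`bit_planes`, `fit_sparse_bit_coefficients`), the greedy energy-based plane selection, and the least-squares refit are illustrative assumptions, not the paper's exact algorithm, which derives the sparse per-bit coefficients from a rate-distortion objective.

```python
import numpy as np

def bit_planes(x, n_bits=8):
    """Uniformly quantize non-negative activations to n_bits and
    return their binary bit planes, shape (n_bits, N)."""
    delta = x.max() / (2 ** n_bits - 1)   # quantization step (assumes x.max() > 0)
    q = np.clip(np.round(x / delta), 0, 2 ** n_bits - 1).astype(np.int64)
    planes = np.stack([(q >> b) & 1 for b in range(n_bits)]).astype(np.float64)
    return planes, delta

def fit_sparse_bit_coefficients(x, n_bits=8, rate=4):
    """Keep only `rate` of the n_bits bit planes (the sparsity /
    code-rate constraint) and refit one coefficient per kept plane
    by least squares to minimize reconstruction distortion."""
    planes, delta = bit_planes(x, n_bits)
    # Greedy proxy for bit significance: energy of each plane under its
    # nominal weight delta * 2^b. The paper instead obtains sparse
    # coefficients by optimizing a rate-distortion objective.
    energy = np.array([(delta * 2 ** b) ** 2 * planes[b].sum()
                       for b in range(n_bits)])
    keep = np.argsort(energy)[-rate:]
    A = planes[keep].T                    # (N, rate) design matrix
    coeffs = np.zeros(n_bits)
    coeffs[keep], *_ = np.linalg.lstsq(A, x, rcond=None)
    x_hat = planes.T @ coeffs             # dequantized activations
    return coeffs, x_hat

# Toy usage on ReLU-like activations.
rng = np.random.default_rng(0)
x = np.maximum(rng.normal(size=10_000), 0.0)
coeffs, x_hat = fit_sparse_bit_coefficients(x, n_bits=8, rate=4)
print(f"kept {np.count_nonzero(coeffs)} of 8 bit planes, "
      f"MSE = {np.mean((x - x_hat) ** 2):.2e}")
```

Dropping half the bit planes halves the activation code rate, while the least-squares refit of the surviving coefficients recovers most of the lost precision, which is the rate-distortion trade-off the paper optimizes per layer.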


