Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks

10/18/2021
by Yikai Wang, et al.

In the low-bit quantization field, training Binary Neural Networks (BNNs) is the extreme solution for easing the deployment of deep models on resource-constrained devices, as they offer the lowest storage cost and significantly cheaper bit-wise operations than 32-bit floating-point counterparts. In this paper, we introduce Sub-bit Neural Networks (SNNs), a new type of binary quantization design tailored to compress and accelerate BNNs. SNNs are inspired by an empirical observation: the binary kernels learned at the convolutional layers of a BNN model tend to be distributed over only small subsets of the full kernel space. Consequently, unlike existing methods that binarize weights one by one, SNNs are trained with a kernel-aware optimization framework that exploits binary quantization in the fine-grained convolutional kernel space. Specifically, our method includes a random sampling step that generates layer-specific subsets of the kernel space, and a refinement step that learns to adjust these subsets of binary kernels via optimization. Experiments on visual recognition benchmarks and hardware deployment on an FPGA validate the great potential of SNNs. For instance, on ImageNet, SNNs of ResNet-18/ResNet-34 with 0.56-bit weights achieve 3.13/3.33 times runtime speed-up and 1.8 times compression over conventional BNNs, with moderate drops in recognition accuracy. Promising results are also obtained when applying SNNs to binarize both weights and activations. Our code is available at https://github.com/yikaiw/SNN.
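To make the 0.56-bit figure concrete: a 3×3 binary kernel has 2^9 = 512 possible values, and if each layer is restricted to a subset of, say, K = 32 of them, every kernel can be stored as a log2(32) = 5-bit index, i.e. 5/9 ≈ 0.56 bits per weight. The sketch below (PyTorch, not the authors' released code; the names sample_kernel_subset, assign_to_subset, and the subset size K are illustrative assumptions) shows the two steps the abstract describes: randomly sampling a layer-specific kernel subset, then snapping each real-valued kernel to its nearest subset member.

```python
import math
import torch

def sample_kernel_subset(k: int, kernel_size: int = 3, seed: int = 0) -> torch.Tensor:
    """Randomly pick k distinct binary (+1/-1) kernels from the full kernel space."""
    n = kernel_size * kernel_size                   # 9 positions -> 2^9 = 512 kernels
    g = torch.Generator().manual_seed(seed)
    codes = torch.randperm(2 ** n, generator=g)[:k]               # k distinct integer codes
    bits = ((codes.unsqueeze(1) >> torch.arange(n)) & 1).float()  # code -> 9 bits
    return (2.0 * bits - 1.0).view(k, kernel_size, kernel_size)   # {0,1} -> {-1,+1}

def assign_to_subset(weight: torch.Tensor, subset: torch.Tensor) -> torch.Tensor:
    """Snap each real-valued 3x3 kernel to its nearest kernel in the subset."""
    w = weight.reshape(-1, subset[0].numel())       # (out_ch * in_ch, 9)
    s = subset.reshape(len(subset), -1)             # (K, 9)
    # All subset kernels share the same norm sqrt(9), so the nearest kernel in
    # Euclidean distance is the one with the largest inner product.
    nearest = (w @ s.t()).argmax(dim=1)
    return s[nearest].reshape(weight.shape)

K = 32                                              # assumed layer-wise subset size
print(f"bits per weight: {math.log2(K) / 9:.2f}")   # log2(32) / 9 ~= 0.56

conv_weight = torch.randn(64, 64, 3, 3)             # a ResNet-style 3x3 conv layer
subset = sample_kernel_subset(K)
binary_weight = assign_to_subset(conv_weight, subset)  # weights used in the forward pass
```

In a full training loop this assignment would be paired with a straight-through estimator so gradients flow back to the latent real-valued weights, and the refinement step of the paper would additionally adjust the subsets themselves during optimization; both are beyond this sketch.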

Related research

04/04/2022 · Soft Threshold Ternary Networks
Large neural networks are difficult to deploy on mobile devices because ...

03/25/2023 · Compacting Binary Neural Networks by Sparse Kernel Selection
Binary Neural Network (BNN) represents convolution weights with 1-bit va...

02/10/2017 · Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
This paper presents incremental network quantization (INQ), a novel meth...

11/29/2019 · Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization
Neural Network quantization, which aims to reduce bit-lengths of the net...

07/03/2021 · Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations
Quantization based model compression serves as high performing and fast ...

09/29/2018 · NICE: Noise Injection and Clamping Estimation for Neural Network Quantization
Convolutional Neural Networks (CNN) are very popular in many fields incl...

04/03/2023 · Optimizing data-flow in Binary Neural Networks
Binary Neural Networks (BNNs) can significantly accelerate the inference...
