Cluster Regularized Quantization for Deep Networks Compression

02/27/2019
by Yiming Hu, et al.

Deep neural networks (DNNs) have achieved great success in a wide range of computer vision tasks, but their application to mobile devices is limited by high storage and computational costs. Much effort has been devoted to compressing DNNs. In this paper, we propose a simple yet effective method for deep network compression, named Cluster Regularized Quantization (CRQ), which can reduce the representation precision of a full-precision model to ternary values without a significant drop in accuracy. In particular, the proposed method aims to reduce quantization error by introducing a cluster regularization term, imposed on the full-precision weights so that they naturally concentrate around the target values. By explicitly regularizing the weights during the re-training stage, the full-precision model can make a smooth transition to its low-bit counterpart. Comprehensive experiments on benchmark datasets demonstrate the effectiveness of the proposed method.
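The abstract does not give the exact form of the regularizer, so the PyTorch sketch below illustrates one plausible reading: a penalty pulling each full-precision weight toward the nearest of the ternary targets {-α, 0, +α}. The per-tensor scale α, the squared-distance penalty, and the weighting factor `lam` are assumptions for illustration, not the paper's definitions.

```python
import torch

def cluster_regularizer(weight: torch.Tensor, alpha: float) -> torch.Tensor:
    """One plausible form of a cluster regularization term (illustrative,
    not the paper's exact formulation): each full-precision weight is
    penalized by its squared distance to the nearest ternary target in
    {-alpha, 0, +alpha}, so weights concentrate around the values they
    will eventually be quantized to."""
    targets = weight.new_tensor([-alpha, 0.0, alpha])  # ternary cluster centers
    dists = (weight.unsqueeze(-1) - targets) ** 2      # (..., 3) distances to each center
    return dists.min(dim=-1).values.sum()              # pull toward nearest center only

def total_loss(task_loss: torch.Tensor, model: torch.nn.Module,
               lam: float = 1e-4) -> torch.Tensor:
    """Hypothetical re-training objective: task loss plus the weighted
    cluster regularizer summed over all weight tensors."""
    reg = task_loss.new_zeros(())
    for w in model.parameters():
        alpha = float(w.detach().abs().mean())  # simple per-tensor scale (a heuristic choice)
        reg = reg + cluster_regularizer(w, alpha)
    return task_loss + lam * reg
```

Minimizing such a term during re-training nudges the weight distribution toward three clusters, so the final hard ternary quantization introduces less error, which matches the smooth-transition behavior the abstract describes.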

Related research

03/10/2021 · Quantization-Guided Training for Compact TinyML Models
We propose a Quantization Guided Training (QGT) method to guide DNN trai...

09/01/2018 · Learning Low Precision Deep Neural Networks through Regularization
We consider the quantization of deep neural networks (DNNs) to produce l...

03/22/2022 · FxP-QNet: A Post-Training Quantizer for the Design of Mixed Low-Precision DNNs with Dynamic Fixed-Point Representation
Deep neural networks (DNNs) have demonstrated their effectiveness in a w...

09/09/2021 · ECQ^x: Explainability-Driven Quantization for Low-Bit and Sparse DNNs
The remarkable success of deep neural networks (DNNs) in various applica...

09/05/2021 · Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss
Network quantization, which aims to reduce the bit-lengths of the networ...

12/18/2019 · Neural Networks Weights Quantization: Target None-retraining Ternary (TNT)
Quantization of weights of deep neural networks (DNN) has proven to be a...

12/20/2018 · SQuantizer: Simultaneous Learning for Both Sparse and Low-precision Neural Networks
Deep neural networks have achieved state-of-the-art accuracies in a wide...
