Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM

07/24/2017
by   Cong Leng, et al.

Although deep learning models are highly effective for various learning tasks, their high computational cost prohibits their deployment in scenarios where memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models whose network weights are represented with very small numbers of bits, referred to as extremely low bit neural networks. We model this problem as a discretely constrained optimization problem. Borrowing ideas from the Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of the network and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms, which lead to considerably faster convergence than conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches for extremely low bit neural networks.
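To make the decoupling idea concrete, below is a minimal sketch (not the authors' implementation) of an ADMM splitting of the kind described in the abstract. It assumes a ternary constraint set {-alpha, 0, +alpha}, a user-supplied loss gradient `loss_grad`, and plain gradient steps standing in for the extragradient update; the hyperparameters `lr`, `rho`, and `steps` are illustrative only.

```python
# Minimal sketch of ADMM-based low-bit quantization (illustrative, not the paper's code).
import numpy as np

def project_ternary(W):
    """Project weights onto {-alpha, 0, +alpha} by alternating between
    assigning discrete codes Q and refitting the scale alpha (a simple
    stand-in for the iterative quantization subproblem)."""
    alpha = np.abs(W).mean()
    Q = np.zeros_like(W)
    for _ in range(10):  # a few alternating steps usually suffice
        Q = np.where(np.abs(W) > alpha / 2, np.sign(W), 0.0)  # best codes for fixed alpha
        denom = np.sum(Q * Q)
        if denom == 0:
            break
        alpha = np.sum(W * Q) / denom  # best scale for fixed codes
    return alpha * Q

def admm_quantize(W, loss_grad, lr=1e-2, rho=1e-3, steps=100):
    """Decouple continuous weights W from a quantized copy G via ADMM
    (scaled dual variable U), as in the splitting described above."""
    G = project_ternary(W)
    U = np.zeros_like(W)
    for _ in range(steps):
        # W-update: minimize loss + (rho/2)||W - G + U||^2 over the continuous W
        # (the paper uses an extragradient step; a plain gradient step is used here).
        grad = loss_grad(W) + rho * (W - G + U)
        W = W - lr * grad
        # G-update: project W + U onto the discrete constraint set.
        G = project_ternary(W + U)
        # Dual update.
        U = U + W - G
    return G
```

As a usage example, `loss_grad` could be any function returning the gradient of the task loss with respect to the weights, e.g. `admm_quantize(W0, lambda W: 2 * (W - target))` for a toy quadratic loss; the point of the sketch is only how the continuous update, the discrete projection, and the dual update alternate.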


