Progressive Learning of Low-Precision Networks

05/28/2019
by Zhengguang Zhou et al.

Recent years have witnessed great advances of deep learning across a variety of vision tasks. However, many state-of-the-art deep neural networks suffer from large size and high complexity, which makes them difficult to deploy on resource-limited platforms such as mobile devices. To this end, low-precision neural networks, which quantize weights or activations into low-bit formats, are widely studied. Though efficient, low-precision networks are usually hard to train and suffer severe accuracy degradation. In this paper, we propose a new training strategy that expands low-precision networks during training and removes the expanded parts for network inference. First, starting from a low-precision network structure, we equip each low-precision convolutional layer with an ancillary full-precision convolutional layer, which guides the network toward good local minima. Second, a decay method is introduced to gradually reduce the output of the added full-precision convolution, which keeps the resulting topology identical to the original low-precision one. Experiments on the SVHN, CIFAR, and ILSVRC-2012 datasets show that the proposed method brings faster convergence and higher accuracy to low-precision neural networks.
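The core idea of the training strategy can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the convolutions are replaced by matrix multiplications for brevity, and the uniform quantizer, the linear decay schedule, and the function names (`quantize_weights`, `progressive_block`) are all assumptions for illustration.

```python
import numpy as np

def quantize_weights(w, bits=2):
    # Hypothetical uniform symmetric quantizer mapping weights to a few
    # discrete levels in [-scale, scale]; stands in for the paper's quantizer.
    scale = np.abs(w).max() + 1e-8
    levels = 2 ** (bits - 1) - 0.5
    return np.round(w / scale * levels) / levels * scale

def progressive_block(x, w, step, total_steps, bits=2):
    # Low-precision branch: a layer computed with quantized weights
    # (a plain matmul here stands in for the convolution).
    low = x @ quantize_weights(w, bits)
    # Ancillary full-precision branch, scaled by a factor that decays to
    # zero over training (a linear schedule is assumed here).
    decay = max(0.0, 1.0 - step / total_steps)
    full = x @ w
    return low + decay * full
```

At the end of training the decay factor reaches zero, so the block's output equals the low-precision branch alone; the ancillary full-precision layers can then be removed, leaving the inference topology identical to the original low-precision network.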


