Convolutional Neural Networks using Logarithmic Data Representation

03/03/2016
by Daisuke Miyashita, et al.

Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To achieve this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using a non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses logarithmic representation at 5 bits, which achieves higher final test accuracy than linear representation at 5 bits.
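
The following is a minimal NumPy sketch of the core idea described in the abstract, not the authors' code: activations are quantized to signed powers of two, so every multiplication in a dot product reduces to a shift by the stored exponent. The function names, the `fsr` (full-scale range) default, and the handling of values that round to zero are illustrative assumptions.

```python
import numpy as np

def log_quantize(x, bits=3, fsr=0):
    """Quantize |x| to a signed power of two.

    Returns (sign, exponent) such that x is approximated by sign * 2**exponent.
    `fsr` sets the largest representable exponent; values whose exponent falls
    below fsr - 2**bits are treated as zero (one possible convention).
    """
    sign = np.sign(x)
    exp = np.round(np.log2(np.abs(x) + 1e-32))   # nearest power-of-two exponent
    lo, hi = fsr - 2 ** bits, fsr                # representable exponent range
    zero_mask = exp < lo                         # too small -> quantize to zero
    exp = np.clip(exp, lo, hi)
    sign = np.where(zero_mask, 0.0, sign)
    return sign, exp.astype(np.int32)

def log_dot(w, x_sign, x_exp):
    """Dot product with log-quantized activations.

    Each term w_i * x_i becomes w_i scaled by 2**x_exp_i, which in hardware is
    an arithmetic shift of w_i by x_exp_i positions rather than a multiply.
    """
    return np.sum(w * x_sign * np.exp2(x_exp))

# Toy usage: compare against the full-precision dot product.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
x = rng.normal(size=64)
x_sign, x_exp = log_quantize(x, bits=3, fsr=0)
print("full precision:", np.dot(w, x))
print("log-quantized :", log_dot(w, x_sign, x_exp))
```

Because the quantized activation is just a sign and a small integer exponent, the `w * 2**exp` step corresponds to shifting the accumuland, which is what allows the bulky digital multipliers mentioned in the abstract to be eliminated.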


