Low-Precision Floating-Point Schemes for Neural Network Training

04/14/2018
by Marc Ortiz, et al.

The use of low-precision fixed-point arithmetic along with stochastic rounding has been proposed as a promising alternative to the commonly used 32-bit floating-point arithmetic for enhancing neural network training in terms of performance and energy efficiency. In the first part of this paper, the behaviour of 12-bit fixed-point arithmetic when training a convolutional neural network on the CIFAR-10 dataset is analysed, showing that such arithmetic is not the most appropriate for the training phase. The paper then presents and evaluates, under the same conditions, alternative low-precision arithmetics, starting with 12-bit floating-point arithmetic. These two representations are subsequently combined with local scaling in order to increase accuracy and get closer to the baseline 32-bit floating-point arithmetic. Finally, the paper introduces a simplified model in which both the outputs and the gradients of the neural network are constrained to power-of-two values, using just 7 bits for their representation. The evaluation demonstrates a minimal loss in accuracy for the proposed Power-of-Two neural network, which avoids multiplications and divisions and thereby significantly reduces the training time as well as the energy consumption and memory requirements during the training and inference phases.
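As a rough illustration of two of the ideas the abstract mentions, the NumPy sketch below shows (a) stochastic rounding of values onto a low-precision fixed-point grid and (b) constraining values to signed powers of two so that multiplications reduce to exponent additions (shifts). The function names, bit widths, and field layouts here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def stochastic_round_fixed_point(x, frac_bits=8, total_bits=12):
    """Quantise x onto a signed fixed-point grid with `frac_bits` fractional
    bits, rounding up or down with probability proportional to the distance
    to each neighbouring grid point (stochastic rounding).
    The 12-bit / 8-fractional-bit split is an illustrative choice."""
    scale = 2.0 ** frac_bits
    scaled = np.asarray(x, dtype=np.float64) * scale
    lower = np.floor(scaled)
    prob_up = scaled - lower                      # distance above the lower grid point
    rounded = lower + (np.random.rand(*scaled.shape) < prob_up)
    # Saturate to the representable range of a signed `total_bits`-bit integer.
    max_int = 2 ** (total_bits - 1) - 1
    rounded = np.clip(rounded, -max_int - 1, max_int)
    return rounded / scale

def power_of_two_quantise(x, exp_bits=6):
    """Constrain x to signed power-of-two values (sign + exponent), so that
    multiplying by a quantised value becomes an exponent addition / bit shift.
    The exponent width is an illustrative assumption."""
    x = np.asarray(x, dtype=np.float64)
    sign = np.sign(x)
    exponent = np.round(np.log2(np.abs(x) + 1e-45))
    # Clamp the exponent to what a signed `exp_bits`-bit field can hold.
    exp_min, exp_max = -(2 ** (exp_bits - 1)), 2 ** (exp_bits - 1) - 1
    exponent = np.clip(exponent, exp_min, exp_max)
    return sign * 2.0 ** exponent

# Example: quantise a small tensor of activations or gradients.
x = np.random.randn(4) * 0.1
print(stochastic_round_fixed_point(x))
print(power_of_two_quantise(x))
```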
