PositNN: Training Deep Neural Networks with Mixed Low-Precision Posit

04/30/2021
by Gonçalo Raposo, et al.

Low-precision formats have proven to be an efficient way to reduce not only the memory footprint but also the hardware resources and power consumption of deep learning computations. Under this premise, the posit numerical format appears to be a highly viable substitute for IEEE floating-point, but its application to neural network training still requires further research. Preliminary results have shown that 8-bit (and even smaller) posits may be used for inference and 16-bit posits for training, while maintaining model accuracy. The presented research aims to evaluate the feasibility of training deep convolutional neural networks using posits. To this end, a software framework was developed that uses simulated posits and quires for end-to-end training and inference. The implementation supports any bit size and configuration, as well as mixed precision, making it suitable for the different precision requirements of the various training stages. The obtained results suggest that 8-bit posits can substitute for 32-bit floats during training with no negative impact on the resulting loss and accuracy.
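As a rough illustration of what simulating posits in software involves (this is not the authors' PositNN framework; the names posit_to_float, posit_value_table, and quantize_to_posit are hypothetical), the Python sketch below decodes a posit⟨n, es⟩ bit pattern into a float and then rounds tensor values to the nearest representable posit by table lookup, which is only practical for small formats such as 8-bit posits.

import numpy as np

def posit_to_float(bits, es):
    """Decode a posit<n, es> bit string, e.g. '01101100', into a Python float."""
    n = len(bits)
    if bits == "0" * n:
        return 0.0
    if bits == "1" + "0" * (n - 1):
        return float("nan")                      # NaR ("not a real")
    sign = -1.0 if bits[0] == "1" else 1.0
    if bits[0] == "1":                           # negative posits: two's complement
        bits = format((1 << n) - int(bits, 2), "0{}b".format(n))
    body = bits[1:]                              # drop the sign bit
    r0 = body[0]
    run = len(body) - len(body.lstrip(r0))       # regime = run of identical bits
    k = run - 1 if r0 == "1" else -run
    rest = body[run + 1:]                        # skip the regime terminator bit
    exp_bits, frac_bits = rest[:es], rest[es:]
    # Missing exponent bits are implicit zeros, hence the left shift
    e = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
    f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    useed = 2 ** (2 ** es)
    return sign * (useed ** k) * (2 ** e) * (1.0 + f)

def posit_value_table(n, es):
    """All finite values representable by posit<n, es>, sorted (feasible for small n)."""
    vals = (posit_to_float(format(i, "0{}b".format(n)), es) for i in range(1 << n))
    return np.array(sorted(v for v in vals if not np.isnan(v)))

def quantize_to_posit(x, table):
    """Round every element of x to the nearest representable posit value."""
    idx = np.clip(np.searchsorted(table, x), 1, len(table) - 1)
    lo, hi = table[idx - 1], table[idx]
    return np.where(np.abs(x - lo) <= np.abs(hi - x), lo, hi)

# Example: simulate posit<8, 2> weights, as if applied after a training step
table8 = posit_value_table(8, 2)
weights = np.random.randn(4, 4)
print(quantize_to_posit(weights, table8))

A real posit implementation rounds in the encoding domain (round to nearest, ties to the nearest even code) and accumulates dot products exactly in a quire rather than in float64, but a table-lookup sketch like this is enough to experiment with low-precision quantization of small models.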

Related research:

12/05/2018 - Deep Positron: A Deep Neural Network Using the Posit Number System
The recent surge of interest in Deep Neural Networks (DNNs) has led to i...

05/18/2023 - Comparative Study: Standalone IEEE 16-bit Floating-Point for Image Classification
Reducing the number of bits needed to encode the weights and activations...

09/30/2022 - Convolutional Neural Networks Quantization with Attention
It has been proven that, compared to using 32-bit floating-point numbers...

02/02/2021 - Benchmarking Quantized Neural Networks on FPGAs with FINN
The ever-growing cost of both training and inference for state-of-the-ar...

07/30/2023 - An Efficient Approach to Mitigate Numerical Instability in Backpropagation for 16-bit Neural Network Training
In this research, we delve into the intricacies of the numerical instabi...

10/13/2020 - Revisiting BFloat16 Training
State-of-the-art generic low-precision training algorithms use a mix of ...

01/31/2023 - Training with Mixed-Precision Floating-Point Assignments
When training deep neural networks, keeping all tensors in high precisio...
