Template-Based Posit Multiplication for Training and Inferring in Neural Networks

07/09/2019
by Raúl Murillo Montero, et al.

The posit number system is arguably the most promising and most discussed topic in computer arithmetic today. The recent breakthroughs claimed for the format proposed by John L. Gustafson have put posits in the spotlight. In this work, we first describe an algorithm for multiplying two posit numbers, even when the number of exponent bits is zero. This configuration, scarcely addressed in the literature, is particularly interesting because it allows the deployment of a fast sigmoid function. The proposed multiplication algorithm is then integrated as a template into the well-known FloPoCo framework, and synthesis results are compared with those of the floating-point multiplier offered by FloPoCo. Second, the performance of posits is studied for neural networks in both the training and inference stages. To the best of our knowledge, this is the first time training has been performed with the posit format, achieving promising results for a binary classification problem even with reduced posit configurations. In the inference stage, 8-bit posits match floating point on the MNIST dataset, but lose some accuracy on CIFAR-10.
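For illustration, the following is a minimal Python sketch of the field-wise posit multiplication described above, restricted to the es = 0 configuration, together with the bit-manipulation sigmoid approximation that this configuration enables. The function names, decoded-field representation, and truncating renormalization are simplifications made here for clarity and are not the paper's FloPoCo template; in particular, a real multiplier would re-encode the result with round-to-nearest-even.

```python
# Hedged sketch (not the paper's FloPoCo template): a software model of
# es = 0 posit multiplication and of the fast sigmoid that es = 0 enables.
# Results are kept in decoded form; re-encoding and correct rounding are omitted.

def decode_posit(bits: int, n: int):
    """Decode an n-bit, es = 0 posit into (sign, scale, fraction in [1, 2))."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0, 0, 0.0                       # zero
    if bits == 1 << (n - 1):
        raise ValueError("NaR (not a real)")   # the single exception pattern
    sign = 1
    if bits >> (n - 1):                        # negative: two's-complement negate
        sign = -1
        bits = (-bits) & mask
    body = (bits << 1) & mask                  # drop the sign bit
    first = (body >> (n - 1)) & 1              # leading regime bit
    run = 0
    while run < n and ((body >> (n - 1 - run)) & 1) == first:
        run += 1
    scale = (run - 1) if first else -run       # regime value k; scale = k when es = 0
    nfrac = max(n - 1 - (run + 1), 0)          # bits left after regime + terminator
    frac_bits = ((body << (run + 1)) & mask) >> (n - nfrac)
    return sign, scale, 1.0 + frac_bits / (1 << nfrac)


def posit_mul_fields(a_bits: int, b_bits: int, n: int):
    """Field-wise product: add scales, multiply fractions, renormalize.
    Returns the decoded result (sign, scale, fraction)."""
    sa, ka, fa = decode_posit(a_bits, n)
    sb, kb, fb = decode_posit(b_bits, n)
    if sa == 0 or sb == 0:
        return 0, 0, 0.0
    frac = fa * fb                             # product fraction lies in [1, 4)
    scale = ka + kb
    if frac >= 2.0:                            # one-bit renormalization
        frac /= 2.0
        scale += 1
    return sa * sb, scale, frac


def fast_sigmoid_8bit(x_bits: int) -> float:
    """Sigmoid approximation for 8-bit, es = 0 posits: flip the sign bit,
    shift the pattern right by two, and reinterpret it as a posit."""
    y_bits = ((x_bits & 0xFF) ^ 0x80) >> 2
    s, k, f = decode_posit(y_bits, 8)
    return s * (2.0 ** k) * f


if __name__ == "__main__":
    # 0b01010000 encodes 1.5 as an 8-bit, es = 0 posit.
    print(posit_mul_fields(0b01010000, 0b01010000, 8))  # (1, 1, 1.125), i.e. 2.25
    print(fast_sigmoid_8bit(0b01000000))  # input 1.0 -> 0.75 (true sigmoid(1) ~ 0.73)
```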
