
Template-Based Posit Multiplication for Training and Inferring in Neural Networks

by Raúl Murillo Montero, et al.
Universidad Complutense de Madrid

The posit number system is arguably one of the most promising and widely discussed topics in computer arithmetic today. The recent breakthroughs claimed for the format proposed by John L. Gustafson have put posits in the spotlight. In this work, we first describe an algorithm for multiplying two posit numbers, even when the number of exponent bits is zero. This configuration, scarcely tackled in the literature, is particularly interesting because it allows the deployment of a fast sigmoid function. The proposed multiplication algorithm is then integrated as a template into the well-known FloPoCo framework, and synthesis results are compared against the floating-point multiplication that FloPoCo itself offers. Second, the performance of posits is studied in the context of neural networks, in both the training and inference stages. To the best of our knowledge, this is the first time that training has been done in posit format, achieving promising results for a binary classification problem even with reduced posit configurations. In the inference stage, 8-bit posits match floating point on the MNIST dataset, but lose some accuracy on CIFAR-10.


