An Investigation on Inherent Robustness of Posit Data Representation

01/05/2021
by Ihsen Alouani, et al.

As the dimensions and operating voltages of computer electronics shrink to satisfy consumers' demand for higher performance and lower power consumption, circuit sensitivity to soft errors increases dramatically. Recently, a new data type called the posit format was proposed in the literature. Posit arithmetic offers advantages such as higher numerical accuracy, greater speed, and simpler hardware design than arithmetic compliant with the IEEE 754-2008 technical standard. In this paper, we propose a comparative robustness study between the 32-bit posit and the 32-bit IEEE 754-2008 compliant representations. We first present a theoretical analysis of single and double bit flips in IEEE 754 compliant numbers and posit numbers. We then conduct exhaustive fault-injection experiments that show considerable inherent resilience in the posit format compared to the classical IEEE 754 compliant representation. To demonstrate a relevant use case for fault-tolerant applications, we perform experiments on a set of machine-learning applications. In more than 95% of the injection explorations, the posit representation is less impacted by faults than the IEEE 754 compliant floating-point representation. Moreover, in 100% of the tested machine-learning applications, the accuracy of posit-implemented systems is higher than that of the classical floating-point-based ones.
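The paper's fault-injection framework is not reproduced here, but the kind of single-bit-flip experiment it describes can be sketched in Python for the IEEE 754 binary32 side. The `flip_bit` helper below is an illustrative assumption, not code from the paper; it shows why flips in the exponent field of a binary32 value can be catastrophic, which is the sensitivity the posit comparison targets:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) in the IEEE 754
    binary32 encoding of x and return the resulting value."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bits ^= 1 << bit
    (y,) = struct.unpack("<f", struct.pack("<I", bits))
    return y

# 1.0 encodes as 0x3F800000 in binary32.
# Flipping the MSB of the exponent (bit 30) yields 0x7F800000 = +inf:
print(flip_bit(1.0, 30))   # inf  -- a single soft error destroys the value
# Flipping the LSB of the exponent (bit 23) halves the value:
print(flip_bit(1.0, 23))   # 0.5
# Flipping the MSB of the fraction (bit 22) perturbs it mildly:
print(flip_bit(1.0, 22))   # 1.5
```

An exhaustive campaign like the paper's iterates `bit` over all 32 positions for each operand of interest and records the output deviation; the posit side would use a posit32 codec in place of `struct`.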


