Customizable Precision of Floating-Point Arithmetic with Bitslice Vector Types

02/15/2016
by Shixiong Xu, et al.

Customizing the precision of data can offer attractive trade-offs between accuracy and hardware resources. We propose a novel form of vector computing aimed at arrays of custom-precision floating-point data. We represent these vectors in a bitslice format and use bitwise instructions to implement, in software, arithmetic circuits that operate at a customized bit precision. Experiments show that this approach can be efficient for vectors of low-precision custom floating-point types, while providing arbitrary bit precision.
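To make the bitslice idea concrete, here is a minimal sketch (not taken from the paper) in Python: a vector of K-bit unsigned integers is transposed into K bit-plane words, where word i holds bit i of every lane, and addition is then a software ripple-carry circuit built from bitwise operations that act on all lanes at once. The function names and the 4-bit width are illustrative assumptions; the paper applies the same representation to floating-point arithmetic.

```python
K = 4  # illustrative custom bit width; any width works

def slice_vec(values, k=K):
    """Transpose a list of k-bit lane values into k bit-plane words.
    Bit `lane` of planes[i] holds bit i of values[lane]."""
    planes = [0] * k
    for lane, v in enumerate(values):
        for i in range(k):
            planes[i] |= ((v >> i) & 1) << lane
    return planes

def unslice_vec(planes, nlanes):
    """Inverse transpose: recover the per-lane integer values."""
    return [sum(((planes[i] >> lane) & 1) << i for i in range(len(planes)))
            for lane in range(nlanes)]

def bitslice_add(a_planes, b_planes):
    """Add all lanes at once with a software ripple-carry adder:
    each step is a full adder expressed as bitwise AND/OR/XOR."""
    out, carry = [], 0
    for a, b in zip(a_planes, b_planes):
        out.append(a ^ b ^ carry)               # sum bit
        carry = (a & b) | (carry & (a ^ b))     # carry out
    return out  # final carry is dropped: arithmetic is modulo 2**k

xs = [1, 5, 7, 12]
ys = [2, 9, 3, 3]
res = unslice_vec(bitslice_add(slice_vec(xs), slice_vec(ys)), 4)
# each lane holds (x + y) mod 16
```

Because every bitwise instruction processes one bit of every lane, the cost of an operation grows with the chosen bit width rather than with a fixed hardware format, which is what makes low-precision custom types attractive in this scheme.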


Related research:

- HOBFLOPS CNNs: Hardware Optimized Bitsliced Floating-Point Operations Convolutional Neural Networks (07/11/2020)
- Multigrid Methods using Block Floating Point Arithmetic (06/30/2023)
- Vectorization of Multibyte Floating Point Data Formats (01/26/2016)
- FPnew: An Open-Source Multi-Format Floating-Point Unit Architecture for Energy-Proportional Transprecision Computing (07/03/2020)
- MCTensor: A High-Precision Deep Learning Library with Multi-Component Floating-Point (07/18/2022)
- Computing Integer Powers in Floating-Point Arithmetic (05/30/2007)
- Exploiting variable precision in GMRES (07/24/2019)
