Monotonicity of Multi-Term Floating-Point Adders

04/03/2023
by Mantas Mikaitis et al.

In the literature on algorithms for the multi-term addition $s_n = \sum_{i=1}^{n} x_i$ in floating-point arithmetic, it is often shown that a hardware unit with a single normalization and rounding step improves precision, area, latency, and power consumption compared with the use of standard add or fused multiply-add units. However, non-monotonicity can appear when computing sums with a subclass of multi-term addition units, a behaviour that is currently unexplored in the literature. We demonstrate that common techniques for performing multi-term addition with $n \geq 4$, without normalization of intermediate quantities, can result in non-monotonicity: increasing one of the addends $x_i$ can decrease the sum $s_n$. Summation is required in dot product and matrix multiplication operations, which increasingly appear in the hardware of supercomputers, so knowing where monotonicity is preserved is of interest to the users of these machines. Our results suggest that the non-monotonic summation found in some commercial hardware devices that implement this class of multi-term adders may have appeared unintentionally, as a consequence of design choices that reduce circuit area and other metrics. To demonstrate our findings, we use formal proofs as well as a numerical simulation of non-monotonic multi-term adders in MATLAB.
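To make the failure mode concrete, the following is a minimal Python sketch of one such adder for nonnegative addends (the paper itself uses formal proofs and a MATLAB simulation; the precision P, alignment width W, function names, and the operand values below are illustrative assumptions, not taken from the paper). The model aligns all addends to the largest exponent, chops each to W bits with no intermediate normalization, accumulates the chopped terms exactly, and normalizes and rounds only once at the end:

```python
import math

P = 4  # significand bits of the working format (illustrative assumption)
W = 4  # bits kept when aligning addends to the largest exponent
       # (no normalization of intermediate quantities; assumption)

def exponent(x):
    # floor(log2(x)) for x > 0, via frexp to avoid log rounding issues
    m, e = math.frexp(x)  # x = m * 2**e with 0.5 <= m < 1
    return e - 1

def multi_term_add(xs):
    """Model of a multi-term adder for nonnegative addends: align all
    addends to the largest exponent, chop each to W bits, accumulate
    exactly, then normalize and round once at the end."""
    e_max = max(exponent(x) for x in xs if x > 0)
    ulp = 2.0 ** (e_max - (W - 1))              # weight of the last kept bit
    acc = sum(math.floor(x / ulp) for x in xs)  # exact integer accumulation
    s = acc * ulp
    if s == 0:
        return 0.0
    # single final normalization + round to nearest with P significand bits
    q = 2.0 ** (exponent(s) - (P - 1))
    return round(s / q) * q

xs_lo = [15.0, 1.875, 1.875, 1.875]  # exact sum 20.625
xs_hi = [16.0, 1.875, 1.875, 1.875]  # exact sum 21.625 (x_1 increased)
print(multi_term_add(xs_lo))  # 18.0
print(multi_term_add(xs_hi))  # 16.0, smaller despite a larger x_1
```

In this sketch, increasing x_1 from 15 to 16 raises the maximum exponent by one, so the other three addends each lose a further bit to the alignment chop; the aggregate loss exceeds the gain in x_1 and the computed sum drops from 18 to 16, even though the exact sum grows from 20.625 to 21.625.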

Related research

10/29/2021
Design and implementation of an out-of-order execution engine of floating-point arithmetic operations
In this thesis, work is undertaken towards the design in hardware descri...

01/14/2019
Faster arbitrary-precision dot product and matrix multiplication
We present algorithms for real and complex dot product and matrix multip...

04/04/2023
Reduced-Precision Floating-Point Arithmetic in Systolic Arrays with Skewed Pipelines
The acceleration of deep-learning kernels in hardware relies on matrix m...

07/07/2022
MiniFloat-NN and ExSdotp: An ISA Extension and a Modular Open Hardware Unit for Low-Precision Training on RISC-V cores
Low-precision formats have recently driven major breakthroughs in neural...

02/18/2021
PLAM: a Posit Logarithm-Approximate Multiplier for Power Efficient Posit-based DNNs
The Posit Number System was introduced in 2017 as a replacement for floa...

02/03/2023
PDPU: An Open-Source Posit Dot-Product Unit for Deep Learning Applications
Posit has been a promising alternative to the IEEE-754 floating point fo...

04/17/2020
Efficient, arbitrarily high precision hardware logarithmic arithmetic for linear algebra
The logarithmic number system (LNS) is arguably not broadly used due to ...
