Reproducible Floating-Point Aggregation in RDBMSs

02/27/2018
by Ingo Müller, et al.

Industry-grade database systems are expected to produce the same result if the same query is repeatedly run on the same input. However, the numerous sources of non-determinism in modern systems make reproducible results difficult to achieve. This is particularly true when floating-point numbers are involved, where the order of the operations affects the final result. As part of a larger effort to extend database engines with data representations more suitable for machine learning and scientific applications, in this paper we explore the problem of making relational GroupBy over floating-point formats bit-reproducible, i.e., ensuring that every execution of the operator produces the same result down to the last bit. To that aim, we first propose a numeric data type that can be used as a drop-in replacement for other number formats and that, unlike standard floating-point formats, is associative. We use this data type to make state-of-the-art GroupBy operators reproducible, but this approach incurs a slowdown between 4x and 12x compared to the same operators using conventional database number formats. We thus explore how to modify existing GroupBy algorithms to make them bit-reproducible and efficient. By using vectorized summation on batches and by carefully balancing batch size, cache footprint, and preprocessing costs, we are able to reduce the slowdown due to reproducibility to a factor between 1.9x and 2.4x for aggregation in isolation and to a mere 2.7% of end-to-end runtime for aggregation-intensive queries in MonetDB. We thereby provide a solid basis for supporting more reproducible operations directly in relational engines. This document is an extended version of an article currently in print for the proceedings of ICDE'18 with the same title and by the same authors; the main additions are more implementation details and experiments.
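
The obstacle named in the abstract is that floating-point addition is not associative: the result of a sum depends on the order in which operands are combined, so any change in parallelization or operator scheduling can change the output bits. A minimal C++ illustration of this well-known behavior with IEEE 754 doubles (nothing here is specific to the paper):

    #include <cstdio>

    int main() {
        double a = 0.1, b = 0.2, c = 0.3;
        double left  = (a + b) + c;  // typically 0.6000000000000001
        double right = a + (b + c);  // typically 0.6
        std::printf("%.17g vs %.17g -> %s\n",
                    left, right, left == right ? "equal" : "different");
        return 0;
    }

An aggregation that splits its input differently across threads or batches is effectively re-parenthesizing this sum, which is why naive parallel GroupBy is not bit-reproducible.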
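
The abstract does not spell out the design of the proposed associative data type. As rough intuition only, one classic way to obtain an order-independent sum is to accumulate into a wide fixed-point integer, since integer addition is associative. The sketch below is a hypothetical illustration of that general idea, not the paper's format: the name FixedPointSum, the scale factor, and the use of the __int128 GCC/Clang extension are all assumptions, and overflow, NaN, and infinity handling are omitted.

    #include <cmath>
    #include <cstdint>

    // Hypothetical associative accumulator: inputs are converted to a wide
    // fixed-point integer, and integer addition (which is associative)
    // performs the summation. Assumes |x| is small enough that x * kScale
    // fits in a long long; a production data type would handle overflow,
    // NaN, and infinity.
    struct FixedPointSum {
        static constexpr double kScale = 4294967296.0;  // 2^32: 32 fractional bits
        __int128 acc = 0;  // wide integer state (GCC/Clang extension)

        void add(double x) {
            acc += static_cast<__int128>(std::llrint(x * kScale));
        }

        // Merging partial sums is plain integer addition, so any split of
        // the input into batches or threads yields bit-identical results.
        void merge(const FixedPointSum& other) { acc += other.acc; }

        double value() const { return static_cast<double>(acc) / kScale; }
    };

Because merging partial sums is exact integer addition, batches can be summed independently, in any order and on any number of threads, and still produce the same bits; this order-independence is the property that a reproducible GroupBy needs, however the paper's actual data type realizes it.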


