Conversion of Mersenne Twister to double-precision floating-point numbers

08/20/2017
by Shin Harase, et al.

The 32-bit Mersenne Twister generator MT19937 is a widely used random number generator. To generate numbers longer than 32 bits, and in particular to convert its outputs into 53-bit double-precision floating-point numbers in [0,1) in the IEEE 754 format, the typical implementation concatenates two successive 32-bit integers and divides the result by a power of 2. However, MT19937 is optimized for its equidistribution properties (the so-called dimension of equidistribution with v-bit accuracy) under the assumption that users mainly consume the 32-bit output values, and the concatenation can therefore degrade the dimension of equidistribution compared with the simple use of 32-bit outputs. In this paper, we analyze this phenomenon by investigating hidden F_2-linear relations among the bits of high-dimensional outputs. As a result, we report that MT19937 with a specific lag set fails several statistical tests, such as the overlapping collision test, the matrix rank test, and the Hamming independence test.
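For concreteness, the conversion described in the abstract is commonly implemented as in the widely circulated MT19937 reference code: the high 27 bits of one 32-bit output and the high 26 bits of the next are concatenated into a 53-bit integer and scaled by 2^-53. The following is a minimal self-contained C sketch of that scheme; the generator core uses the standard MT19937 parameters, and the 27/26-bit split follows the common reference implementation rather than anything specific to this paper.

#include <stdio.h>
#include <stdint.h>

/* Minimal MT19937 core (standard parameters), trimmed for illustration. */
#define N 624
#define M 397
#define MATRIX_A   0x9908b0dfUL
#define UPPER_MASK 0x80000000UL
#define LOWER_MASK 0x7fffffffUL

static uint32_t mt[N];
static int mti = N + 1;

static void init_genrand(uint32_t s)
{
    mt[0] = s;
    for (mti = 1; mti < N; mti++)
        mt[mti] = 1812433253UL * (mt[mti - 1] ^ (mt[mti - 1] >> 30)) + (uint32_t)mti;
}

static uint32_t genrand_int32(void)
{
    uint32_t y;
    static const uint32_t mag01[2] = { 0x0UL, MATRIX_A };

    if (mti >= N) {                       /* regenerate the state block */
        int kk;
        if (mti == N + 1) init_genrand(5489UL);
        for (kk = 0; kk < N - M; kk++) {
            y = (mt[kk] & UPPER_MASK) | (mt[kk + 1] & LOWER_MASK);
            mt[kk] = mt[kk + M] ^ (y >> 1) ^ mag01[y & 0x1UL];
        }
        for (; kk < N - 1; kk++) {
            y = (mt[kk] & UPPER_MASK) | (mt[kk + 1] & LOWER_MASK);
            mt[kk] = mt[kk + (M - N)] ^ (y >> 1) ^ mag01[y & 0x1UL];
        }
        y = (mt[N - 1] & UPPER_MASK) | (mt[0] & LOWER_MASK);
        mt[N - 1] = mt[M - 1] ^ (y >> 1) ^ mag01[y & 0x1UL];
        mti = 0;
    }
    y = mt[mti++];
    /* Tempering */
    y ^= (y >> 11);
    y ^= (y << 7)  & 0x9d2c5680UL;
    y ^= (y << 15) & 0xefc60000UL;
    y ^= (y >> 18);
    return y;
}

/* The conversion discussed in the abstract: concatenate the high 27 bits
   of one 32-bit output with the high 26 bits of the next to form a 53-bit
   integer, then scale by 2^-53 to obtain a double in [0,1). */
static double genrand_res53(void)
{
    uint32_t a = genrand_int32() >> 5;    /* high 27 bits */
    uint32_t b = genrand_int32() >> 6;    /* high 26 bits */
    return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
    /* 67108864 = 2^26, 9007199254740992 = 2^53 */
}

int main(void)
{
    init_genrand(5489UL);
    for (int i = 0; i < 5; i++)
        printf("%.17f\n", genrand_res53());
    return 0;
}

Note that each double consumes two consecutive 32-bit outputs; it is exactly this concatenation whose equidistribution properties the paper analyzes.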


